aws-load-balancer-controller
A new NLB with a name that's already in use overrides the existing NLB's configuration
Describe the bug
A new NLB created with a name that's already in use overrides the existing NLB's configuration, making the currently used port unreachable.
Steps to reproduce
Create two NLBs with the same name.
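For reference, a minimal reproduction sketch via the controller's `service.beta.kubernetes.io/aws-load-balancer-name` annotation. Service names, ports, and selectors below are illustrative, not from the original report:

```yaml
# Two Services that both request the NLB name "my-nlb" (illustrative names/ports).
apiVersion: v1
kind: Service
metadata:
  name: svc-a
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-name: my-nlb
    service.beta.kubernetes.io/aws-load-balancer-type: external
spec:
  type: LoadBalancer
  ports:
    - port: 9095
  selector:
    app: a
---
apiVersion: v1
kind: Service
metadata:
  name: svc-b
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-name: my-nlb
    service.beta.kubernetes.io/aws-load-balancer-type: external
spec:
  type: LoadBalancer
  ports:
    - port: 9096
  selector:
    app: b
```

Reconciling the second Service rewrites the shared NLB's listeners, which is the behavior described below.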
Expected outcome
Should fail if the NLB's name is already used.
Environment
Creating NLBs with K8s LoadBalancer services.
- AWS Load Balancer controller version: v2
- Kubernetes version
- Using EKS (yes/no), if so version?
Additional Context:
More about my use case:
- I created a new set of NLBs with the same names as existing NLBs.
- That overrode the existing NLBs' configurations, making the NLBs unreachable on the preconfigured port.
In the screenshot, both highlighted K8s svc LBs have the same name but only listen on the new service port "9096", making "9095" unreachable.
https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2533 related
/kind bug We should check whether the ELBv2 CreateLoadBalancer API returns an existing LB's ARN when creating a new LB with an existing LB's name (what's the behavior if the same settings, e.g. tags, are provided, and what's the behavior if different settings/tags are provided?). If that's the case, then when using the LB name feature (or always, as part of validation), we should check whether an existing LB with the same name exists and validate its tags.
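As a rough illustration of the validation proposed above, here is a sketch of name-collision checking. All types and names below are made up for illustration, not the controller's real API; the stack tag key mirrors the `service.k8s.aws/stack` convention but the ownership model is an assumption:

```go
package main

import "fmt"

// validateLBName is a hypothetical pre-creation check: it rejects creation
// when a load balancer with the requested name already exists but carries
// different tags, i.e. it appears to belong to another service.
// `existing` maps LB name -> that LB's tags.
func validateLBName(name string, tags map[string]string, existing map[string]map[string]string) error {
	existingTags, ok := existing[name]
	if !ok {
		return nil // name is free, safe to create
	}
	for k, v := range tags {
		if existingTags[k] != v {
			return fmt.Errorf("load balancer %q already exists with different tags", name)
		}
	}
	return nil // same ownership tags: treat as idempotent re-create
}

func main() {
	existing := map[string]map[string]string{
		"my-nlb": {"service.k8s.aws/stack": "default/svc-a"},
	}
	// Same stack tag: treated as owned by this service, creation may proceed.
	fmt.Println(validateLBName("my-nlb", map[string]string{"service.k8s.aws/stack": "default/svc-a"}, existing))
	// Different stack tag: another service owns the name, so fail fast
	// instead of silently overriding its listeners.
	fmt.Println(validateLBName("my-nlb", map[string]string{"service.k8s.aws/stack": "default/svc-b"}, existing))
}
```

The key point is failing before any mutation, rather than letting reconciliation rewrite an NLB another service depends on.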
Hi everyone,
I find myself in the same situation of being able to reuse the NLB with multiple services. I need to expose approximately 1800 services from my EKS cluster, and for orchestrating these services, I have a wrapper that manages the creation of deployments and services for them.
If the integration with NLB were similar to ALB's, I would only need to integrate with the Kubernetes API, which would make the management of everything very convenient. Otherwise, I would need to integrate with the AWS API to manage the creation of NLBs, target groups, associate the target group bindings, and so on.
After analyzing the code, I have noticed that the loop responsible for cleaning up the old listeners is located in:
- pkg/deploy/elbv2/listener_synthesizer.go (lines 61-65)
```go
matchedResAndSDKLSs, unmatchedResLSs, _ := matchResAndSDKListeners(resLSs, sdkLSs)
// for _, sdkLS := range unmatchedSDKLSs {
// 	if err := s.lsManager.Delete(ctx, sdkLS); err != nil {
// 		return err
// 	}
// }
```
By commenting out the loop and building the controller, I have managed to reuse the NLB as expected. However, I would prefer a solution that does not break future updates or the controller's correct functionality.
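To make explicit why commenting out that loop works, here is a standalone sketch of the matching step. The types and function below are made up to mirror what `matchResAndSDKListeners` does in spirit (keying listeners by port), not the controller's real implementation:

```go
package main

import "fmt"

// listener is an illustrative stand-in for the controller's resource/SDK
// listener types; only the port matters for matching here.
type listener struct {
	Port int
}

// matchListeners splits desired (res) and existing (sdk) listeners into
// matched pairs, desired-only, and existing-only sets, keyed by port.
func matchListeners(res, sdk []listener) (matched [][2]listener, resOnly, sdkOnly []listener) {
	sdkByPort := make(map[int]listener)
	for _, ls := range sdk {
		sdkByPort[ls.Port] = ls
	}
	for _, ls := range res {
		if s, ok := sdkByPort[ls.Port]; ok {
			matched = append(matched, [2]listener{ls, s})
			delete(sdkByPort, ls.Port)
		} else {
			resOnly = append(resOnly, ls)
		}
	}
	for _, ls := range sdkByPort {
		sdkOnly = append(sdkOnly, ls)
	}
	return matched, resOnly, sdkOnly
}

func main() {
	// Service B desires only 9096; the shared NLB already has 9095 from
	// service A. 9095 lands in sdkOnly ("unmatched SDK listeners").
	matched, resOnly, sdkOnly := matchListeners(
		[]listener{{9096}},
		[]listener{{9095}},
	)
	// The stock controller deletes everything in sdkOnly, which is what
	// breaks service A; skipping that delete loop leaves 9095 intact.
	fmt.Println(len(matched), len(resOnly), sdkOnly)
}
```

This also shows why the workaround is risky: with deletion disabled, genuinely stale listeners (e.g. from a removed port) are never cleaned up either.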
Thank you, Best regards!
/kind good-first-issue
@M00nF1sh: The label(s) kind/good-first-issue
cannot be applied, because the repository doesn't have them.
In response to this:
/kind good-first-issue
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale