
A new NLB with a name that's already in use overrides the existing NLB's configuration

Open • eslam-gomaa opened this issue 1 year ago • 9 comments

Describe the bug

Creating a new NLB with a name that's already in use overrides the existing NLB's configuration, making the currently used port unreachable.

Steps to reproduce

Create two NLBs with the same name (e.g. two Kubernetes LoadBalancer Services requesting the same NLB name); a sketch follows below.
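For illustration, a minimal repro sketch (hedged: names, namespace, selectors, and ports are made up; the annotations follow the controller's documented Service annotations, with service.beta.kubernetes.io/aws-load-balancer-name pinning both Services to the same NLB name):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nlbService builds a LoadBalancer Service that asks the controller for an NLB
// named "shared-nlb", exposing a single port.
func nlbService(name string, port int32) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      name,
			Namespace: "default",
			Annotations: map[string]string{
				// Both Services request the same NLB name.
				"service.beta.kubernetes.io/aws-load-balancer-name":            "shared-nlb",
				"service.beta.kubernetes.io/aws-load-balancer-type":            "external",
				"service.beta.kubernetes.io/aws-load-balancer-nlb-target-type": "ip",
			},
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": name},
			Ports:    []corev1.ServicePort{{Port: port}},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	// Creating svc-b reconciles the same NLB name and replaces svc-a's
	// listener (9095) with its own (9096).
	for _, svc := range []*corev1.Service{nlbService("svc-a", 9095), nlbService("svc-b", 9096)} {
		if _, err := cs.CoreV1().Services(svc.Namespace).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
}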

Expected outcome

Creation should fail if the NLB's name is already in use.

Environment

Creating NLBs with K8s LoadBalancer services.

  • AWS Load Balancer controller version: v2
  • Kubernetes version
  • Using EKS (yes/no), if so version?

Additional Context:

More about my use case:

  • I created a new set of NLBs with the same names as existing NLBs.
  • That overrode the existing NLBs' configurations, making them unreachable on the preconfigured ports.

In the screenshot, both highlighted K8s svc LBs have the same name, but the NLB is only listening on the new service port "9096", making "9095" unreachable.

[screenshot]

eslam-gomaa, Feb 05 '23 22:02

Related: https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2533

eslam-gomaa, Feb 05 '23 22:02

/kind bug

We should check whether the ELBv2 CreateLoadBalancer API returns an existing LB's ARN when creating a new LB with an existing LB's name (what is the behavior if the same settings, e.g. tags, are provided, and what if different settings/tags are provided?). If that is the case, then when using the LB name feature (or perhaps always), we should check whether an existing LB with the same name exists and validate its tags.
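As a sketch of that validation (not the controller's actual code path; this calls the AWS SDK for Go v2 directly, and the elbv2.k8s.aws/cluster tag key is used here only as an illustrative ownership marker):

package main

import (
	"context"
	"errors"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/config"
	elbv2 "github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2"
	"github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2/types"
)

// validateLBName returns nil if lbName is free or the existing LB carries the
// expected cluster tag, and an error if the name is taken by a foreign LB.
func validateLBName(ctx context.Context, c *elbv2.Client, lbName, cluster string) error {
	out, err := c.DescribeLoadBalancers(ctx, &elbv2.DescribeLoadBalancersInput{
		Names: []string{lbName},
	})
	if err != nil {
		var nf *types.LoadBalancerNotFoundException
		if errors.As(err, &nf) {
			return nil // name is unused, safe to create
		}
		return err
	}
	if len(out.LoadBalancers) == 0 {
		return nil
	}
	for _, lb := range out.LoadBalancers {
		tags, err := c.DescribeTags(ctx, &elbv2.DescribeTagsInput{
			ResourceArns: []string{*lb.LoadBalancerArn},
		})
		if err != nil {
			return err
		}
		for _, td := range tags.TagDescriptions {
			for _, t := range td.Tags {
				if *t.Key == "elbv2.k8s.aws/cluster" && *t.Value == cluster {
					return nil // already owned by this cluster
				}
			}
		}
	}
	return fmt.Errorf("load balancer %q already exists and is not owned by cluster %q", lbName, cluster)
}

func main() {
	cfg, err := config.LoadDefaultConfig(context.Background())
	if err != nil {
		panic(err)
	}
	if err := validateLBName(context.Background(), elbv2.NewFromConfig(cfg), "shared-nlb", "my-cluster"); err != nil {
		panic(err)
	}
}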

M00nF1sh, Feb 08 '23 23:02

Hi everyone,

I find myself in the same situation, needing to reuse one NLB across multiple services. I need to expose approximately 1800 services from my EKS cluster, and for orchestrating these services I have a wrapper that manages the creation of their deployments and services.

If the integration with NLB were similar to ALB's, I would only need to integrate with the Kubernetes API, which would make managing everything very convenient. Otherwise, I would need to integrate with the AWS API to manage the creation of NLBs and target groups, associate them with Services via TargetGroupBindings, and so on.
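For illustration, this is roughly what that AWS-API-driven alternative looks like once an NLB listener and target group already exist: a TargetGroupBinding attaches a Service to the target group, and the controller then only manages target registration. A minimal sketch using the dynamic client; the ARN, names, namespace, and port are hypothetical, and the fields follow the controller's documented TargetGroupBinding CRD (elbv2.k8s.aws/v1beta1):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn := dynamic.NewForConfigOrDie(cfg)

	// Bind an existing target group (created out of band via the AWS API) to a
	// Service; the controller keeps the targets registered.
	tgb := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "elbv2.k8s.aws/v1beta1",
		"kind":       "TargetGroupBinding",
		"metadata": map[string]interface{}{
			"name":      "svc-a-tgb",
			"namespace": "default",
		},
		"spec": map[string]interface{}{
			// Hypothetical ARN, replace with the real target group's ARN.
			"targetGroupARN": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/svc-a/0123456789abcdef",
			"targetType":     "ip",
			"serviceRef": map[string]interface{}{
				"name": "svc-a",
				"port": int64(9095),
			},
		},
	}}

	gvr := schema.GroupVersionResource{Group: "elbv2.k8s.aws", Version: "v1beta1", Resource: "targetgroupbindings"}
	if _, err := dyn.Resource(gvr).Namespace("default").Create(context.Background(), tgb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}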

After analyzing the code, I have noticed that the loop responsible for cleaning up the old listeners is located in:

  • pkg/deploy/elbv2/listener_synthesizer.go (lines 61-65)
matchedResAndSDKLSs, unmatchedResLSs, _ := matchResAndSDKListeners(resLSs, sdkLSs)
// for _, sdkLS := range unmatchedSDKLSs {
// 	if err := s.lsManager.Delete(ctx, sdkLS); err != nil {
// 		return err
// 	}
// }

By commenting out this loop (which otherwise deletes every listener on the NLB that the service currently being reconciled does not declare; that is the third return value, which I replaced with _) and rebuilding the controller, I have managed to reuse the NLB as expected. However, I would prefer a solution that does not break future updates or the correct functionality of the controller.
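A toy, self-contained sketch of that behavior (not the controller's real types): each reconciliation computes the desired listeners from one Service only, so reconciling the second Service treats the first Service's listener as unmatched and deletes it.

package main

import "fmt"

// reconcile mimics the synthesizer: listeners desired by the Service being
// reconciled are created, and listeners already on the LB that this Service
// does not declare are deleted (the commented-out loop above).
func reconcile(lbListeners map[int32]bool, desiredPorts []int32) {
	desired := map[int32]bool{}
	for _, p := range desiredPorts {
		desired[p] = true
	}
	for p := range lbListeners {
		if !desired[p] {
			fmt.Printf("  delete listener %d (not declared by this Service)\n", p)
			delete(lbListeners, p)
		}
	}
	for p := range desired {
		if !lbListeners[p] {
			fmt.Printf("  create listener %d\n", p)
			lbListeners[p] = true
		}
	}
}

func main() {
	lb := map[int32]bool{} // listeners on the shared NLB
	fmt.Println("reconcile svc-a (port 9095):")
	reconcile(lb, []int32{9095})
	fmt.Println("reconcile svc-b (port 9096), same NLB name:")
	reconcile(lb, []int32{9096})
	fmt.Println("final listeners:", lb) // only 9096 remains; 9095 is gone
}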

Thank you, Best regards!

rdcarrera, Jun 22 '23 15:06

/kind good-first-issue

M00nF1sh, Aug 11 '23 18:08

@M00nF1sh: The label(s) kind/good-first-issue cannot be applied, because the repository doesn't have them.

In response to this:

/kind good-first-issue

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot, Aug 11 '23 18:08

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot, Jan 26 '24 06:01

/remove-lifecycle stale

tculp, Jan 29 '24 17:01

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot, Apr 28 '24 18:04

/remove-lifecycle stale

tculp, Apr 29 '24 14:04