
Question: one NLB with multiple listeners fully managed from k8s/YAML - possible ?

Open przemolb opened this issue 2 years ago • 30 comments

When creating a new NLB in the AWS console you can add several listeners to a single NLB (cost savings!) and route traffic from different ports to different target groups. When defining a new k8s Service to create an NLB (using annotations like service.beta.kubernetes.io/aws-load-balancer-*), is it possible to define multiple listeners at the k8s level? In general: is it possible to create a new NLB with multiple listeners using just k8s YAML?

przemolb avatar Sep 19 '21 07:09 przemolb

@przemolb, you can add multiple ports to the service spec which will translate to multiple listeners on the NLB.

kishorj avatar Sep 19 '21 19:09 kishorj

That sounds great, but how do I use a different selector for different NLB listeners? My current config is:

apiVersion: v1
kind: Service
metadata:
  name: nlb-testing
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"

spec:
  type: LoadBalancer
  ports:
    - name: test1
      port: 85
      targetPort: 85
      protocol: TCP
    - name: test2
      port: 84
      targetPort: 84
      protocol: TCP
  selector:
    app: hello-world

Right now it registers my application behind both listeners, but I want the ability to define a different selector per port. How can I achieve this? Thank you.

infakt-HNP avatar Sep 20 '21 09:09 infakt-HNP

You could use a TargetGroupBinding. If you use TGB, you'd need to create the NLB and its target groups manually. You can then use TargetGroupBinding to associate NodePort (for instance targets) or ClusterIP (for IP targets) type services with the NLB target groups.
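
For reference, a minimal TargetGroupBinding sketch could look like the following; the service name, port, and target group ARN are placeholders, and the NLB plus its target group have to be created outside of Kubernetes:

apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: nlb-testing-test1
spec:
  serviceRef:
    name: nlb-testing        # existing ClusterIP (for ip targets) or NodePort (for instance targets) service
    port: 85
  targetGroupARN: <arn-of-manually-created-target-group>   # placeholder ARN
  targetType: ip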

kishorj avatar Sep 20 '21 18:09 kishorj

@kishorj can we achieve this only with TargetGroupBinding? If so, it requires manual operations (creating the NLB + target groups). My question was whether it is possible to "create a new NLB with multiple listeners using just k8s YAML". Am I right to assume this is not possible at the moment?

przemolb avatar Sep 20 '21 21:09 przemolb

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Dec 19 '21 21:12 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jan 18 '22 22:01 k8s-triage-robot

Is this feature on any roadmap?

przemolb avatar Jan 19 '22 15:01 przemolb

/remove-lifecycle rotten

nthienan avatar Jan 19 '22 16:01 nthienan

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Apr 19 '22 17:04 k8s-triage-robot

/remove-lifecycle stale

przemolb avatar Apr 20 '22 13:04 przemolb

To clarify: the whole point of using the AWS Load Balancer Controller is to manage ALBs/NLBs from k8s/YAML. In this particular case that is not possible and manual work is required.

przemolb avatar Apr 20 '22 14:04 przemolb

We might be able to support this once we support the Gateway API. The K8s Service API doesn't have this kind of flexibility, and we don't want to add something like serviceGroup.

M00nF1sh avatar Apr 21 '22 16:04 M00nF1sh

@M00nF1sh wouldn't it be possible to implement it in a similar way to what has been done for ALBs? In that case (if I read the docs correctly), using the same alb.ingress.kubernetes.io/group.name annotation causes multiple k8s Ingresses to be merged as rules into a single ALB. I would expect a similar annotation (say service.beta.kubernetes.io/group.name) to be available on Services too... Or is that what you mean when you say you don't want to add something like serviceGroup? Just trying to figure it out, it's honestly not a critique :)
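
For comparison, the ALB-side grouping mentioned above looks roughly like this; the group name, path, and backend service are illustrative only:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-one
  annotations:
    alb.ingress.kubernetes.io/group.name: shared-alb   # Ingresses sharing this name are merged into one ALB
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /one
            pathType: Prefix
            backend:
              service:
                name: app-one
                port:
                  number: 80

A second Ingress carrying the same group.name annotation would be added as additional rules on the same ALB instead of creating a new load balancer.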

nfeltrin-dkb avatar May 02 '22 07:05 nfeltrin-dkb

I think this is a needed feature. In our case ALB isn't a feasible option, as our services use plain TCP/TLS, and it makes no sense to deploy one NLB per service; it's a waste of resources. Adding some kind of service.beta.kubernetes.io/group.name would be perfect :)

TheMatrix97 avatar May 17 '22 10:05 TheMatrix97

Can anyone change this issue to a feature request instead of a question?

vumdao avatar Jul 21 '22 05:07 vumdao

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 19 '22 06:10 k8s-triage-robot

/remove-lifecycle stale

przemolb avatar Oct 19 '22 10:10 przemolb

FYI, this is possible with the ingress-nginx ingress controller. See the docs. I agree that it would be nice to see this in the AWS Load Balancer Controller.
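
For context, the ingress-nginx approach referenced here exposes extra TCP ports through a ConfigMap that the controller reads via its --tcp-services-configmap flag; the namespace, service names, and ports below are illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "85": "default/hello-world:85"   # external port -> namespace/service:port
  "84": "default/hello-world:84"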

jslatterycnvrtr avatar Nov 29 '22 08:11 jslatterycnvrtr

One more use case: we need to expose services to other VPCs via a VPC endpoint service, which requires an NLB. Currently we can only create one NLB per Service, which is quite a waste. @jslatterycnvrtr mentioned the nginx ingress controller, which we have also used successfully in previous implementations, but when we tried to migrate to the AWS-native controller, we found this feature missing.

revilwang avatar Jan 16 '23 06:01 revilwang

I tried to create two Services that share the same service.beta.kubernetes.io/aws-load-balancer-name annotation. It creates the LB and target groups correctly, but when adding the second target group, it clears the first target group from the LB.

I imagine that it would be fairly straightforward to keep track of the target groups created by the controller and not remove them from the LB.
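
Roughly, the setup being described looks like this (names and ports are placeholders); both Services point the controller at the same NLB, and, as reported above, reconciling one ends up clearing the other's target group from the LB:

apiVersion: v1
kind: Service
metadata:
  name: svc-a
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-name: shared-nlb   # same LB name on both Services
spec:
  type: LoadBalancer
  ports:
    - port: 85
      targetPort: 85
      protocol: TCP
  selector:
    app: app-a
---
apiVersion: v1
kind: Service
metadata:
  name: svc-b
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-name: shared-nlb   # same LB name on both Services
spec:
  type: LoadBalancer
  ports:
    - port: 84
      targetPort: 84
      protocol: TCP
  selector:
    app: app-b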

tuomari avatar Jan 31 '23 11:01 tuomari

> I tried to create two Services that share the same service.beta.kubernetes.io/aws-load-balancer-name annotation. It creates the LB and target groups correctly, but when adding the second target group, it clears the first target group from the LB.
>
> I imagine that it would be fairly straightforward to keep track of the target groups created by the controller and not remove them from the LB.

This kind of behavior is pretty common with k8s resources. Controllers try to "reconcile" desired state and actual state: the usual pattern is to loop through each object that represents desired state and ensure the actual state adheres to it. What you are seeing is exactly that:

  1. service1 -> configures lb1
  2. service2 -> configures lb1

and it will likely continue to bounce back and forth between states. The controller doesn't know that there's another service out there also configuring the same LB, and that these two desired states should be merged.

It does sound like that might be an interesting feature though. E.g.

service.beta.kubernetes.io/aws-load-balancer-merge-services: "foo,bar"

The questions I would have about the behavior are:

  1. What should be done if one of these services doesn't exist?
  2. Do all services in the group need to indicate their willingness to be merged with each other (is coordination required)?
  2a. If coordination is required, what happens if there's misalignment?
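
Purely as a sketch of the merge idea suggested above, and leaving the coordination questions open, the hypothetical annotation (not implemented by the controller today) might be used like this:

apiVersion: v1
kind: Service
metadata:
  name: foo
  annotations:
    # hypothetical annotation from the comment above -- not supported by the controller today
    service.beta.kubernetes.io/aws-load-balancer-merge-services: "foo,bar"
spec:
  type: LoadBalancer
  ports:
    - port: 85
      targetPort: 85
      protocol: TCP
  selector:
    app: foo

with a matching Service named bar carrying the same annotation, so that both end up as listeners on one shared NLB.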

ghostsquad avatar Feb 04 '23 21:02 ghostsquad

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar May 05 '23 21:05 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jun 04 '23 21:06 k8s-triage-robot

/remove-lifecycle rotten

gtowey-air avatar Jun 05 '23 16:06 gtowey-air

Are there any updates regarding this functionality? We would very much like to be able to provision a single NLB with listeners for multiple services.

boris-de-groot avatar Sep 06 '23 09:09 boris-de-groot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 29 '24 06:01 k8s-triage-robot

/remove-lifecycle stale

Faustinekitten avatar Jan 29 '24 08:01 Faustinekitten

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar May 14 '24 17:05 k8s-triage-robot