aws-load-balancer-controller
Question: one NLB with multiple listeners fully managed from k8s/YAML - possible ?
When creating a new NLB in the AWS console we can add several listeners to one NLB (cost savings!) and route traffic from different ports to different target groups.
When defining a new k8s Service to create an NLB (using annotations like service.beta.kubernetes.io/aws-load-balancer-*), is it possible to define multiple listeners at the k8s level?
In general: is it possible to create a new NLB with multiple listeners using just k8s YAML?
@przemolb, you can add multiple ports to the Service spec, which will translate to multiple listeners on the NLB.
That looks great, but how do I use a different selector for different NLB listeners? My current config is:
apiVersion: v1
kind: Service
metadata:
  name: nlb-testing
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  ports:
    - name: test1
      port: 85
      targetPort: 85
      protocol: TCP
    - name: test2
      port: 84
      targetPort: 84
      protocol: TCP
  selector:
    app: hello-world
Right now it adds my application to both listeners, but I want the ability to define a different selector per port. How can I achieve this? Thank you.
You could use TargetGroupBinding. If you use TGB, you'd need to create the NLB and its target groups manually. You can then use TargetGroupBinding to associate NodePort (for instance targets) or ClusterIP (for IP targets) type Services with the NLB target groups.
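For reference, a minimal TargetGroupBinding looks roughly like this; the target group ARN refers to a target group created outside the cluster, and the names here are illustrative:

apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: test1-tgb                  # illustrative name
spec:
  serviceRef:
    name: hello-world              # existing ClusterIP (for ip targets) or NodePort (for instance targets) Service
    port: 85
  targetGroupARN: arn:aws:elasticloadbalancing:...   # ARN of the manually created target group
  targetType: ip                   # or "instance"

One binding per target group, so each listener of the manually created NLB can forward to a different Service.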
@kishorj can we achieve this only with TargetGroupBinding? If so, it requires manual operations (create the NLB + target groups). My question was: "create a new NLB with multiple listeners using just k8s YAMLs". Am I right that this is not possible at the moment?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Is this feature on any roadmap?
/remove-lifecycle rotten
/remove-lifecycle stale
To clarify: the whole point of using the AWS Load Balancer Controller is to manage ALBs/NLBs using k8s/YAML. In this particular case that is not possible and manual work is required.
We might be able to support this when we support the Gateway API. The K8s Service API doesn't have such flexibility, and we don't want to add something like serviceGroup.
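For context, the Gateway API models exactly this shape: one Gateway with several listeners, each listener routed to its own backend. A rough sketch of that future direction follows; the gatewayClassName is a placeholder, and the controller did not support the Gateway API at the time of this thread:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-nlb
spec:
  gatewayClassName: example-nlb-class   # placeholder class name
  listeners:
    - name: test1
      protocol: TCP
      port: 85
    - name: test2
      protocol: TCP
      port: 84
---
# TCPRoute is still an alpha Gateway API resource (v1alpha2)
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: test1-route
spec:
  parentRefs:
    - name: shared-nlb
      sectionName: test1        # attach only to the "test1" listener
  rules:
    - backendRefs:
        - name: hello-world     # Service backing this listener
          port: 85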
@M00nF1sh wouldn't it be possible to implement this in a similar way to what has been done for ALBs? In that case (if I read the docs correctly), using the same alb.ingress.kubernetes.io/group.name annotation causes multiple k8s Ingresses to be created as rules in a single ALB. I would expect a similar annotation (say service.beta.kubernetes.io/group.name) to be available on Services too... Or is that what you mean when you say you don't want to add something like serviceGroup?
Just trying to figure it out, it's honestly not a critique :)
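For reference, the existing ALB IngressGroup behaviour mentioned above looks roughly like this; both Ingresses end up as rules on one shared ALB (names and hosts are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: shared-alb
spec:
  rules:
    - host: app1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app2
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: shared-alb   # same group.name -> same ALB
spec:
  rules:
    - host: app2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2
                port:
                  number: 80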
I think this is a needed feature. In our case ALB isn't a feasible option, as our services use plain TCP / TLS. It makes no sense to have to deploy one NLB per service; it's a waste of resources. Adding some kind of service.beta.kubernetes.io/group.name annotation would be perfect :)
Can anyone change this issue to a feature request instead of a question?
/remove-lifecycle stale
FYI, this is possible with the ingress-nginx ingress controller. See its docs. I agree that it would be nice to see this in the AWS Load Balancer Controller.
One more use case: we need to expose services to other VPCs through a VPC endpoint service, which requires an NLB. Currently we can only create one NLB per Service, which is quite a waste. @jslatterycnvrtr mentioned the nginx ingress controller, which we have also used successfully in previous implementations, but when we try to migrate to the AWS-native controller we find this feature missing.
I tried to create two Services with the same service.beta.kubernetes.io/aws-load-balancer-name annotation. It creates the LB and target groups correctly, but when adding the second target group it clears the first target group from the LB.
I imagine that it would be fairly straightforward to keep track of the target groups created by the controller, and not remove them from the LB.
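Roughly what was attempted, with illustrative names; the service.beta.kubernetes.io/aws-load-balancer-name annotation fixes the NLB name, so both Services point at the same load balancer and, as the next comment explains, end up overwriting each other:

apiVersion: v1
kind: Service
metadata:
  name: svc-a
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-name: shared-nlb   # same LB name on both Services
spec:
  type: LoadBalancer
  selector:
    app: app-a
  ports:
    - port: 85
      targetPort: 85
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: svc-b
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-name: shared-nlb   # same LB name on both Services
spec:
  type: LoadBalancer
  selector:
    app: app-b
  ports:
    - port: 84
      targetPort: 84
      protocol: TCP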
This kind of behavior is pretty common with k8s resources. Controllers try to "reconcile" desired state and actual state, so the usual pattern is to loop through each object that represents a desired state and ensure the actual state adheres to it. What you are seeing is exactly that:
- service1 -> configures lb1
- service2 -> configures lb1
and it will likely continue to bounce back and forth between the two states. Neither reconciliation knows that there's another service out there also configuring the same LB, or that these two states should be merged.
It does sound like that might be an interesting feature though, e.g.
service.beta.kubernetes.io/aws-load-balancer-merge-services: "foo,bar"
(a hypothetical usage is sketched after the questions below). The questions I would have about the behavior are:
- What should be done if one of these services doesn't exist?
- Do all services in the group need to indicate their willingness to be merged with each other (is coordination required)?
  - If coordination is required, what happens if there's misalignment?
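To make that concrete, here is a purely hypothetical sketch of such an annotation; service.beta.kubernetes.io/aws-load-balancer-merge-services does not exist in the controller, it is only the idea floated above:

# HYPOTHETICAL annotation -- not implemented by the controller
apiVersion: v1
kind: Service
metadata:
  name: foo
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-merge-services: "foo,bar"   # both Services list the full group
spec:
  type: LoadBalancer
  selector:
    app: foo
  ports:
    - port: 85
      targetPort: 85
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: bar
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-merge-services: "foo,bar"   # same list, so coordination is explicit
spec:
  type: LoadBalancer
  selector:
    app: bar
  ports:
    - port: 84
      targetPort: 84
      protocol: TCP

Requiring every Service in the group to carry the same list would answer the coordination question, at the cost of having to update all of them when the group changes.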
/remove-lifecycle rotten
Are there any updates regarding this functionality? We would very much like to be able to provision a single NLB with listeners pointing to multiple services.
/remove-lifecycle stale