aws-load-balancer-controller
Allow service.beta.kubernetes.io/aws-load-balancer-target-group-attributes to apply to a specific port for NLB
Is your feature request related to a problem? We would like to apply target-group-attributes (such as proxy-protocol) to only one of the ports (target groups) in the Service object for the NLB. Based on reading the code, I currently don't see a way to do this.
Describe the solution you'd like NLB allows attributes such as proxy-protocol to be applied to individual target groups, and we would like to configure different target groups differently for an NLB. I believe this may be possible for ALB by creating an Ingress object plus NodePort Service objects with different annotations per target group, but I don't think this is possible for NLB.
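To make the request concrete, here is a sketch of the current annotation behavior being discussed (Service name, ports, and selector are hypothetical): the target-group-attributes annotation is Service-wide, so it applies to every target group the controller creates, with no per-port scoping.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nlb-service   # hypothetical name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    # Applies to BOTH target groups created for the ports below;
    # there is no way to limit it to only one port, which is this
    # feature request.
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: "proxy_protocol_v2.enabled=true"
spec:
  type: LoadBalancer
  selector:
    app: my-app          # hypothetical selector
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443
```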
Describe alternatives you've considered I don't have an alternative currently.
One workaround for now is to create the target groups yourself (so you can customize the attributes), and then use the TargetGroupBinding resource to bind the Service to the target groups (so the target group's targets are updated dynamically).
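The workaround above can be sketched as follows, assuming a pre-created target group with the desired attributes (proxy-protocol enabled, for example); the names and the truncated ARN are placeholders you would fill in with your own values:

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-service-tls   # hypothetical name
spec:
  serviceRef:
    name: my-service     # the existing Service
    port: 443            # the one port that needs custom target group attributes
  # ARN of the target group you created manually (attributes customized there):
  targetGroupARN: arn:aws:elasticloadbalancing:...
```

Because the attributes live on the manually created target group, each port can get its own settings, at the cost of managing the target groups outside the controller.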
But I admit this is a valid feature request, to allow customers to fully control ALB/NLB behavior from Kubernetes. We'll look for a better model to support this kind of configuration flexibility when we add Gateway API support.
We could use this feature as well. Our use case: we run the standard HTTP ports with proxy-protocol enabled, with ingress-nginx behind the NLB. I also added an nginx TCP stream port (https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/), and for that port I need proxy-protocol disabled. Right now I just disable it manually on that target group.
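For context, the linked ingress-nginx mechanism exposes a raw TCP port via the tcp-services ConfigMap, mapping an nginx listening port to a namespaced Service and port; the port number and Service name here are examples:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # nginx will listen on 9000 and proxy the raw TCP stream
  # to port 9000 of Service "my-tcp-app" in namespace "default"
  "9000": "default/my-tcp-app:9000"
```

Since this stream port does not speak the proxy protocol, it needs its NLB target group configured differently from the HTTP target groups, which is exactly the per-port attribute scoping this issue asks for.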
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.