aws-load-balancer-controller
Add support for selecting a Target Group using AWS resource tags
Is your feature request related to a problem?
Provide an alternative, tag-based method of selecting the Target Group resource in the TargetGroupBinding config.
As it stands, the only supported method for selecting a target group to bind against is hardcoding its ARN in the resource config.
Unfortunately, K8s config is commonly committed to VCS, and hardcoding unique, randomly generated IDs that change whenever the underlying resource is re-created or substituted is a letdown, especially when dealing with highly dynamic and volatile environments that are managed outside of Kubernetes.
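For reference, this is roughly what a minimal TargetGroupBinding looks like today; the names and ARN below are placeholders:

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-tgb                # placeholder
spec:
  serviceRef:
    name: my-service          # placeholder Service whose endpoints get registered
    port: 80
  # The ARN is unique and randomly generated, so it changes every time the
  # target group is re-created outside of Kubernetes.
  targetGroupARN: arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/my-tg/0123456789abcdef
```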
Describe the solution you'd like
I would like an alternative method of configuring the TargetGroupBinding resource that doesn't require a Target Group's ARN, but instead takes a dictionary of tags to search/filter/match against a Target Group resource.
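Purely as an illustration of the idea (the `targetGroupSelector` field and its shape below are hypothetical, not part of the current API), the spec could look something like:

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-tgb
spec:
  serviceRef:
    name: my-service
    port: 80
  # Hypothetical: select the target group by its AWS resource tags
  # instead of hardcoding an ARN.
  targetGroupSelector:
    tags:
      Environment: staging
      Service: my-service
```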
Describe alternatives you've considered
Not committing my TargetGroupBinding config to a VCS system and instead managing its lifecycle using the same tools responsible for provisioning our ALB Target Groups. I'm not a fan of this, as it would introduce multiple ways of configuring/managing K8s config/resources.
@Lngramos thanks for creating this feature request. Would you help us understand the desired behavior?
- Is the TargetGroup selected by tags expected to already exist when the TargetGroupBinding is created?
- What's the expected behavior if you remove the tags from a previously selected TargetGroup?
- What if the original TargetGroup's tags are deleted and the same tags are applied to another TargetGroup?
- What's the expected behavior if multiple TargetGroups are selected (i.e. have matching tags)?
Hi @M00nF1sh
> Is the TargetGroup selected by tags expected to already exist when the TargetGroupBinding is created?

Ideally, yes. If there were no valid target group to bind to, I'd expect the controller to keep checking for matches as part of its reconciliation.

> What's the expected behavior if you remove the tags from a previously selected TargetGroup?

I would expect the target instances to be deregistered from the target group when the controller runs its next reconciliation cycle.

> What if the original TargetGroup's tags are deleted and the same tags are applied to another TargetGroup?

The controller should swap them: deregister the target instances from the previously targeted target group and register them with the new one.

> What's the expected behavior if multiple TargetGroups are selected (i.e. have matching tags)?

An excellent question that I don't have a fully formed opinion on yet. Let me sleep on it and dig into the technical consequences, as I'm not yet sure whether it would actually be a problem.
A slightly related follow-up question: does anyone know how the controller currently handles two TargetGroupBinding resources that reference the same target group ARN? (I'll test this out myself if not.)
> What's the expected behavior if multiple TargetGroups are selected (i.e. have matching tags)?

The two behaviors that would make sense (to me, at least) are:
- Set them as weighted target groups for each rule, with equal traffic going to each target group unless a more specific action has been specified for splitting traffic on that rule (see the weighted-forwarding sketch below); or
- Don't use any of them and log an error stating that only one target group can be used.

Any other behavior I've thought of (e.g. using whichever TargetGroup was created first or most recently, alphabetical order, or whichever one the API returns first) would introduce behavior that is difficult to troubleshoot.
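For context on the weighted option, this is roughly how equal-weight traffic splitting is expressed today with the controller's Ingress actions annotation (the action and service names are placeholders; this shows the existing annotation mechanism, not the proposed TargetGroupBinding behavior):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: weighted-example
  annotations:
    # Custom action that forwards to two services' target groups with equal weight.
    alb.ingress.kubernetes.io/actions.weighted-forward: >
      {"type":"forward","forwardConfig":{"targetGroups":[
        {"serviceName":"service-a","servicePort":"80","weight":50},
        {"serviceName":"service-b","servicePort":"80","weight":50}]}}
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: weighted-forward   # must match the action name above
                port:
                  name: use-annotation
```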
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@Lngramos Although not perfect, I believe https://github.com/kubernetes-sigs/aws-load-balancer-controller/pull/2655 might help you. One can use it today via the image marcosdiez/aws-alb-ingress-controller:20220524-1006.
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
So... although this PR would be useful for others, I no longer need it for any of my clients. Instead, we are following the approach described here: https://github.com/marcosdiez/presentations/tree/master/2022-10-21-k8s-aws-alb-terraform-no-helm
Therefore I will not finish the PR.
/lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
Any plans to add this capability to a roadmap?
/lifecycle stale
/remove-lifecycle stale