aws-load-balancer-controller
Tags annotation not independent
Describe the bug
Right now, if two Ingresses share an ALB, both of them need an identical alb.ingress.kubernetes.io/tags annotation. I want a different Name tag for each Ingress, so I can see which ALB rule belongs to which microservice in the first column of the "Listener rules" view in the AWS Load Balancer WebUI.
However I get this error:
{"level":"error","ts":"2023-07-26T12:35:28Z","msg":"Reconciler error","controller":"ingress","object":{"name":"mynamespace-myenv"},"namespace":"","name":"mynamespace-myenv","reconcileID":"9c3fa25f-4821-442a-99d6-55398d765aa6","error":"conflicting tag Name: mynamespace-myenv | mynamespace-myenv-mymicroservicename"}
This seems related to https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1940, which explains that the ALB, target groups, and security groups all need to have the same tags. However, there is only one SG per ALB, whereas there can be many Ingresses per ALB.
So essentially I would like to see mynamespace-myenv-mymicroservicename in the "Name tag" column of the AWS Console's "Listener rules" table, instead of mynamespace-myenv for each of the dozens of rules I have there.
Steps to reproduce
Create two Ingresses that share an ALB but have diverging alb.ingress.kubernetes.io/tags annotations.
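A minimal reproduction sketch, assuming two Ingresses joined into one ALB via the alb.ingress.kubernetes.io/group.name annotation (all resource names, paths, and the group name are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mymicroservicename
  namespace: mynamespace
  annotations:
    alb.ingress.kubernetes.io/group.name: mynamespace-myenv
    # Name tag that includes the microservice name
    alb.ingress.kubernetes.io/tags: Name=mynamespace-myenv-mymicroservicename
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /mymicroservicename
            pathType: Prefix
            backend:
              service:
                name: mymicroservicename
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: otherservice
  namespace: mynamespace
  annotations:
    alb.ingress.kubernetes.io/group.name: mynamespace-myenv
    # Diverging Name tag on the same shared ALB -> the controller
    # logs "conflicting tag Name: ..." and fails reconciliation
    alb.ingress.kubernetes.io/tags: Name=mynamespace-myenv
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /otherservice
            pathType: Prefix
            backend:
              service:
                name: otherservice
                port:
                  number: 80
```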
Expected outcome
Allow different Ingresses to have diverging tags annotations. In the meantime, document that all Ingresses sharing an ALB must have identical tags annotations.
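Until diverging values are supported, the only workaround is to give every Ingress in the group a byte-identical annotation value (the value shown is illustrative):

```yaml
# Must be identical on every Ingress that shares the ALB,
# otherwise the controller raises a "conflicting tag" error.
alb.ingress.kubernetes.io/tags: Name=mynamespace-myenv
```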
Environment
- AWS Load Balancer controller version: v2.5.2
- Kubernetes version: 1.27
- Using EKS (yes/no), if so version? yes, 1.27
Additional Context:
@stephan242, hi, unfortunately right now we do not allow multiple values for the same tag key within an ingress group, and I agree we can improve here.
/kind feature
@oliviassss thanks for acknowledging. However, I would argue this is a bug rather than a feature request: the documentation's Annotations table states that the MergeBehavior of alb.ingress.kubernetes.io/tags is Merge, not Exclusive, so apparently it is working differently from how it was intended to work.
@stephan242
Currently the tags apply to all resources, including the load balancer (which is shared), so we require that the tags do not conflict with each other.
What if we additionally tagged the rules with your Ingress's name and namespace, would that suit your need? (However, you would not be able to customize the tag key and value.) Another option I can think of is introducing a separate "tags" annotation that applies only to Ingress-specific resources (e.g. target groups and listener rules), but it seems like overkill.
@M00nF1sh thanks for looking into this, much appreciated!
What if we additionally tagged the rules with your Ingress's name and namespace, would that suit your need? (However, you would not be able to customize the tag key and value.)
If the Ingress name ended up in the AWS Name tag of the ALB rule, that would suit my need and make a lot of sense to me; I wouldn't need to override it. I'm not sure that's true for others, though, as they might do different things with their Name tags.
As for also adding a hard-coded Namespace tag, that might turn out to be more complicated: I'm using Cloudposse's Null Label module, which adds a Namespace tag to all my other AWS resources that has nothing to do with a K8s Namespace per se, so I could foresee issues if the two happen not to be aligned. But to be fair, you taking over management of the AWS Name tag alone would already be a huge win and would solve all my needs.
Another option I can think of is introducing a separate "tags" annotation that applies only to Ingress-specific resources (e.g. target groups and listener rules), but it seems like overkill.
That option is also fine with me; I don't think it's necessarily overkill (with dozens of rules on the ALB, it's a real drag to find the right one in the UI when their names are all identical), and you could even make it backwards compatible by defaulting it to the already existing tags.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale