aws-load-balancer-controller
allow copying labels as tags
Is your feature request related to a problem? We need our AWS LBs to be tagged with the team/project that created them.
Describe the solution you'd like Our services are already labeled with the team that owns them, so I'd like aws-lbc to reuse those labels instead of having to tell every team how and why to configure aws-load-balancer-additional-resource-tags.
The logic would fit neatly into buildAdditionalResourceTags with a new CLI option like copy-labels-to-resource-tags=team,project,foo,bar; a minimal sketch is below.
I think I can make a PR since this is fairly simple, but I wanted feedback on whether this is mergeable first.
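Roughly the merge logic I'm proposing (the flag name and helper are hypothetical, not existing controller API; explicit tags would win on key conflicts):

```go
package main

import "fmt"

// copyLabelsToResourceTags copies whitelisted Service/Ingress labels into the
// tag map, letting explicitly configured tags win on key conflicts.
// (Hypothetical helper; buildAdditionalResourceTags could call something like it.)
func copyLabelsToResourceTags(labels, tags map[string]string, whitelist []string) map[string]string {
	merged := make(map[string]string, len(tags)+len(whitelist))
	for _, key := range whitelist {
		if v, ok := labels[key]; ok {
			merged[key] = v
		}
	}
	for k, v := range tags { // explicit tags override copied labels
		merged[k] = v
	}
	return merged
}

func main() {
	labels := map[string]string{"team": "payments", "project": "checkout", "other": "ignored"}
	tags := map[string]string{"env": "prod"}
	fmt.Println(copyLabelsToResourceTags(labels, tags, []string{"team", "project"}))
	// Output: map[env:prod project:checkout team:payments]
}
```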
Describe alternatives you've considered Make every team copy-paste the annotation config and ensure the copies stay in sync.
You can leverage the existing Service annotation to add additional tags: service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags
See: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.8/guide/service/annotations/#aws-resource-tags
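For example (service name and tag values are illustrative), the annotation takes comma-separated key=value pairs:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service   # illustrative
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: team=payments,project=checkout
spec:
  type: LoadBalancer
  selector:
    app: my-service
  ports:
    - port: 80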
@grosser We would need security reviews to accept such PRs. AWS tags are security-related (they can be used for tag-based authorization), so we cannot blindly copy over all labels.
Personally I'm open to a proposed feature that provides a label whitelist, but this seems low priority to me since you can already specify tags via the aws-load-balancer-additional-resource-tags annotation (which seems low effort if you define Ingresses/Services via some automated tooling).
BTW, have you considered automatically generating the "aws-load-balancer-additional-resource-tags" annotation via a webhook for your teams (or some manifest-generation tooling), instead of embedding this functionality in the controller itself?
- a webhook could work, but it involves parsing the existing annotation and adding entries from labels; not too hard (see the sketch below)
- if it's a security issue, then why is aws-load-balancer-additional-resource-tags not a security issue? It does not have a whitelist either
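A rough sketch of that merge, assuming the key=value,key=value annotation format from the docs above (the whitelist keys are just examples; a real webhook would apply this as a JSON patch):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

const tagsAnnotation = "service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags"

// mergeLabelTags parses the existing annotation value, adds entries derived
// from whitelisted labels (without overriding explicit tags), and serializes
// the result back into the annotation's key=value,key=value format.
func mergeLabelTags(annotations, labels map[string]string, keys []string) string {
	tags := map[string]string{}
	for _, pair := range strings.Split(annotations[tagsAnnotation], ",") {
		if k, v, ok := strings.Cut(pair, "="); ok {
			tags[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	for _, k := range keys {
		if v, ok := labels[k]; ok {
			if _, exists := tags[k]; !exists { // explicit tags stay authoritative
				tags[k] = v
			}
		}
	}
	parts := make([]string, 0, len(tags))
	for k, v := range tags {
		parts = append(parts, k+"="+v)
	}
	sort.Strings(parts) // deterministic output, so repeated mutations are idempotent
	return strings.Join(parts, ",")
}

func main() {
	ann := map[string]string{tagsAnnotation: "env=prod"}
	lbl := map[string]string{"team": "payments", "project": "checkout"}
	fmt.Println(mergeLabelTags(ann, lbl, []string{"team", "project"}))
	// Output: env=prod,project=checkout,team=payments
}
```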
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.