aws-load-balancer-controller
[k8s] Shared Backend SecurityGroup for LoadBalancer
Hello, this may sound more like a question than a feature request, but let's see. The problem we (I) experience is that by default the AWS LBC creates the backend SG with the following tags:
name: k8s-traffic-<cluster_name>-<hash_of_cluster_name>
tags:
  elbv2.k8s.aws/cluster: <cluster_name>
  elbv2.k8s.aws/resource: backend-sg
But I didn't find a way to add more tags to the backend security group through the AWS LBC, either with an annotation or with an additional flag. Is there a way to do it with some parameter, or can these tags not be controlled outside of the LBC?
I would like an option to add/modify tags on the backend SG provisioned by the LBC.
The only alternative for now is a script that iterates over the backend SGs and tags them, but as you already know this is not a convenient approach, since the environment is very dynamic.
Regards
Update: Is there a way for ingress tags to take precedence over the default tags from the AWS LBC?
@vivanov83, currently the default tags take the highest priority. Please refer to the live docs for more details: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.7/guide/ingress/ingress_class/#spectags
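For context, tags set via the ingress class spec (the spec.tags field the linked doc covers) are declared on an IngressClassParams object. A minimal sketch with illustrative names and tag values; see the follow-up below for which tag source takes priority and how the shared backend SG itself is tagged:

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: IngressClassParams
metadata:
  name: alb-params            # illustrative name
spec:
  tags:
    - key: team               # illustrative tag key/value
      value: platform
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  controller: ingress.k8s.aws/alb
  parameters:
    apiGroup: elbv2.k8s.aws
    kind: IngressClassParams
    name: alb-params
```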
@vivanov83
The original answer by oliviassss is not accurate.
You can add additional tags to the "shared backend security group" via the --default-tags controller flag.
The resulting tags on it will be a combination of the tags set via --default-tags and the controller-managed tags elbv2.k8s.aws/cluster: <cluster_name> and elbv2.k8s.aws/resource: backend-sg.
Note: tags specified via --default-tags are applied to all other resources as well (ALB/NLB/target groups/etc.).
I meant that tags specified via the controller-level flag --default-tags have the highest priority when tags are specified through the controller flag, annotations, and the ingress class spec. Sorry for the confusion :p
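For anyone landing here, a minimal sketch of setting --default-tags, assuming the controller is installed with the official aws-load-balancer-controller Helm chart (the defaultTags value below comes from that chart; the cluster name and tag keys/values are illustrative):

```yaml
# values.yaml for the aws-load-balancer-controller Helm chart
clusterName: my-cluster   # illustrative cluster name
defaultTags:              # rendered roughly as --default-tags=team=platform,env=prod
  team: platform
  env: prod
```

If you manage the controller Deployment yourself, the equivalent is passing the flag directly as a container argument, e.g. --default-tags=team=platform,env=prod. The shared backend SG then carries these tags in addition to the elbv2.k8s.aws/cluster and elbv2.k8s.aws/resource tags mentioned above.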
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.