aws-load-balancer-controller
Resource tags cannot be modified after creation when aws-load-balancer-type is set to "nlb"
Is your feature request related to a problem?
When attempting to update resource tags on existing load balancers, we are unable to mutate the values.
While debugging this with AWS, we determined that if the service.beta.kubernetes.io/aws-load-balancer-type annotation isn't set to external or nlb-ip, the load balancer controller ignores the request. The in-tree service controller would presumably modify the values, but it passes all responsibility over to the aws-load-balancer-controller.
Describe the solution you'd like
When the resource tag annotation is modified on a Service with aws-load-balancer-type set to nlb, the tags should be updated in place, without having to recreate the NLBs.
Describe alternatives you've considered
We've considered changing the aws-load-balancer-type to 'nlb-ip', but the documentation says this would lead to a resource leak.
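For reference, a minimal Service sketch of the setup described above, assuming tags are set via the service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags annotation; the Service name, selector, ports, and tag values are placeholders rather than values from our environment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-nlb-service   # hypothetical name, for illustration only
  annotations:
    # The type annotation as we currently have it set:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # Editing the value of this tags annotation after the NLB exists is the step
    # that is not reflected on the AWS resources (placeholder tag values):
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "Environment=prod,Team=platform"
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
    - port: 443
      targetPort: 8443
```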
Thank you for bringing this to our attention. We are currently attempting to reproduce this issue on our end.
/kind feature
Thanks - a key thing is that the additional tags annotation has to be mutated after being set the first time. It'll work once, basically.
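To restate that as a sketch (annotation key assumed to be service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags, tag values are placeholders):

```yaml
# First apply: the tag below is created on the NLB as expected.
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "Environment=dev"
```

```yaml
# Subsequent edit of the same annotation: the new value is ignored and the NLB keeps its original tags.
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "Environment=prod"
```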
Not sure if this is related. I have tried to add the annotation
service.beta.kubernetes.io/aws-load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=bucketnamehere,deletion_protection.enabled=true
on an existing service and the controller did not add the access_logs configuration. I then tried to use the deprecated annotations, and the controller immediately added the access log configuration.
service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: bucketnamehere
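For completeness, a sketch of the two annotation styles being compared, as they would appear on the Service (the bucket name is the placeholder from above):

```yaml
metadata:
  annotations:
    # Newer combined attributes annotation, which the controller did not act on when added to the existing Service:
    service.beta.kubernetes.io/aws-load-balancer-attributes: "access_logs.s3.enabled=true,access_logs.s3.bucket=bucketnamehere,deletion_protection.enabled=true"
    # Deprecated per-attribute annotations, which the controller picked up immediately:
    service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "bucketnamehere"
```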
Wouldn't be surprised; the new code only gets invoked under very specific circumstances, otherwise it falls back to the in-tree code.
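For context, a sketch of the annotations that, per the controller documentation, hand a Service to the newer controller code path instead of the in-tree provider; the target-type value here is illustrative:

```yaml
metadata:
  annotations:
    # "external" (or the older "nlb-ip") routes reconciliation to the aws-load-balancer-controller:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    # Used alongside "external" to choose how targets are registered ("instance" or "ip"):
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "instance"
```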
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.