cloud-provider-aws
                                
Adding tag annotations on service manifest for NLB does not trigger update on AWS
BUG REPORT: /kind bug
What happened: Adding this annotation to an existing service: {"metadata":{"annotations":{"service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags":"team=devops,env=qa"}}} did not trigger an update on the AWS load balancer (no new tags) when it is an NLB.
What you expected to happen: Expected to see these two new tags on the existing load balancer: team=devops,env=qa
How to reproduce it (as minimally and precisely as possible):
1. Create a service without the annotation.
2. Add the annotation {"metadata":{"annotations":{"service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags":"team=devops,env=qa"}}}.
3. Observe that no tags appear on AWS.
Anything else we need to know?: Creating a new service WITH the annotation will create the expected tags on AWS. (A minimal client-go sketch of the same annotation update is included below.)
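For illustration only, here is a minimal client-go sketch of the update that fails to propagate. The service name, namespace, and kubeconfig path are placeholders; the patch body is the one quoted above.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a local kubeconfig (path is a placeholder).
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Apply the annotation as a strategic-merge patch to an existing service.
	patch := []byte(`{"metadata":{"annotations":{"service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags":"team=devops,env=qa"}}}`)
	svc, err := clientset.CoreV1().Services("default").Patch(
		context.TODO(), "my-nlb-service", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	// The annotation lands on the Service object, but no new tags appear on the NLB.
	fmt.Println(svc.Annotations["service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags"])
}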
Environment:
Kubernetes version (use kubectl version): 1.18.3
Cloud provider or hardware configuration: AWS
OS (e.g. from /etc/os-release): Debian GNU/Linux 8 (jessie)
Install tools:
Others: Using Kops
@HoJinKind: The label(s) sig/aws cannot be applied, because the repository doesn't have them
In response to this:
BUG REPORT: /kind bug /sig aws
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hello, I also faced the same issue where tags are not updated for the service when using the aws-load-balancer-additional-resource-tags annotation, if the load balancer is an NLB. However, they are added to the load balancer if it is a CLB.
So, I looked into the code responsible for ensuring an NLB through a service spec of type LoadBalancer. First, the method responsible for ensuring an NLB is ensureLoadBalancerv2() [1]. As we can see here [2], before the load balancer is provisioned, the tags are fetched from the annotations with the getLoadBalancerAdditionalTags() method and added to the createRequest [3] inside the if block, and that request is finally used to create the load balancer [4].
Note that execution enters the if block [5] only when the condition (loadBalancer == nil) is met, i.e. only the first time, when a new load balancer is being provisioned.
However, if the load balancer has already been created, I do not see any code that applies added or removed tags (which could be determined simply by comparing the output of a describe-tags call on the NLB with the annotation), and getLoadBalancerAdditionalTags() is not called from within the else block [6], which is responsible for syncing changes when the load balancer already exists.
The implementation of getLoadBalancerAdditionalTags() can be found here [7]; it converts the comma-separated list of key=value pairs in the ServiceAnnotationLoadBalancerAdditionalTags annotation into a map. ServiceAnnotationLoadBalancerAdditionalTags is a constant for service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags [8].
This led me to believe that the tags are not updated because there is no underlying code to handle this scenario. Since the tag-creation method is called only when the load balancer is first created, the workaround is to create a new service, in which case all the tags are picked up by getLoadBalancerAdditionalTags(). A rough sketch of what syncing the tags on update could look like follows.
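For illustration only, here is a minimal sketch of such a sync for the existing-load-balancer path. This is not code that exists in the provider; the helper name is hypothetical, and it assumes the aws-sdk-go elbv2 client plus the tag map returned by getLoadBalancerAdditionalTags(annotations).

package aws

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elbv2"
	"github.com/aws/aws-sdk-go/service/elbv2/elbv2iface"
)

// ensureNLBAdditionalTags is a hypothetical helper that could be called from
// the else block of ensureLoadBalancerv2 so that annotation tags are applied
// on service updates as well, not only at creation time.
func ensureNLBAdditionalTags(client elbv2iface.ELBV2API, lbARN string, desired map[string]string) error {
	if len(desired) == 0 {
		return nil
	}
	tags := make([]*elbv2.Tag, 0, len(desired))
	for k, v := range desired {
		tags = append(tags, &elbv2.Tag{Key: aws.String(k), Value: aws.String(v)})
	}
	// AddTags creates missing keys and updates the value of existing ones,
	// so added or modified annotation tags are reflected on the NLB. Keys
	// removed from the annotation would still need a separate RemoveTags call.
	if _, err := client.AddTags(&elbv2.AddTagsInput{
		ResourceArns: []*string{aws.String(lbARN)},
		Tags:         tags,
	}); err != nil {
		return fmt.Errorf("error adding tags to load balancer: %v", err)
	}
	return nil
}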
======================= ***** Why tags are being updated for CLB classic load balancer *****
Now, coming to the reason why it works for a load balancer of type CLB: the method responsible for ensuring a CLB is ensureLoadBalancer() [9]. The getLoadBalancerAdditionalTags() method is called [10] inside the if block; the tags go into the createRequest, which is then passed to CreateLoadBalancer. This adds the tags to the load balancer at creation time.
However, in the else block [11], which is responsible for syncing changes to the load balancer, I see that getLoadBalancerAdditionalTags() is called again [12], which explains why the CLB is able to update the tags when the annotation changes while the NLB is not.
{
    // Add additional tags
    klog.V(2).Infof("Creating additional load balancer tags for %s", loadBalancerName)
    tags := getLoadBalancerAdditionalTags(annotations)
    if len(tags) > 0 {
        err := c.addLoadBalancerTags(loadBalancerName, tags)
        if err != nil {
            return nil, fmt.Errorf("unable to create additional load balancer tags: %v", err)
        }
    }
}
========================
References
[1] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L117
[2] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L126
[3] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L154
[4] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L160
[5] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L131-L173
[6] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L174-L354
[7] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L90
[8] https://github.com/kubernetes/kubernetes/blob/a6405451674428172956116b4c93710ac03ebddd/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L188
[9] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L901
[10] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L934
[11] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L968-L1168
[12] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L1120-L1130
Additionally, I noticed that the tags are not removed from the load balancer when the service is updated and the annotation is removed. This matters because we will eventually hit the limit on the number of tags per load balancer and run into new issues as a result.
============== ***** Why tags are not removed for CLB and NLB when the annotation is deleted by updating the service *****
-- For NLB it is straightforward: since getLoadBalancerAdditionalTags() is not called while syncing changes, there is no code to handle this scenario.
-- For CLB, I think the code does not take removing tags into consideration, as I do not see any logic that decides which tags need to be removed versus which need to be added. The current logic adds whatever tags are defined in service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags without considering whether they already exist.
========================
So, in my view, the scenarios listed below are possible for the service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags annotation.
| Annotation specified | Scenario 1 | Scenario 2 |
|---|---|---|
| During initial Service creation | Yes | No |
| During update | Modified / Removed | Yes |
The code we currently have for ensuring a CLB handles the scenario where no annotation is specified in the service spec initially and the annotation is added later during an update: because the length of the tag map is greater than zero, the CLB is updated with the given tags.
It also handles the scenario where we initially create a service of type CLB with the annotation and later update the service with additional tags: the new tags are added, but the existing tags are not reconciled. For example, if we initially use the annotation below during service creation:
service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: env=test,name=test-q6pv,project=inquiry
All of the tags mentioned above are added to the Classic Load Balancer. However, if we then update the service spec with the annotation below:
service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: name=test-q6pv,project=inquiry,group=ADS
We can see that only the tag group=ADS is added to the load balancer, while env=test, the tag removed from the annotation, is still present in the console.
=====================
I think the behavior above can be explained by the code below:
{
    // Add additional tags
    klog.V(2).Infof("Creating additional load balancer tags for %s", loadBalancerName)
    tags := getLoadBalancerAdditionalTags(annotations)
    if len(tags) > 0 {
        err := c.addLoadBalancerTags(loadBalancerName, tags)
        if err != nil {
            return nil, fmt.Errorf("unable to create additional load balancer tags: %v", err)
        }
    }
}
During an update, we only consider the length of the new tag map for the service and then call the addLoadBalancerTags() method [1], which in turn calls the AddTags() method of the ELB interface [2]. We do not reconcile the difference between the existing and desired tags and update the load balancer accordingly; a rough sketch of such a reconciliation step is below.
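For illustration, a minimal sketch of the missing reconciliation step. The helper is hypothetical (it does not exist in the provider); the idea is to diff the tags currently on the load balancer against the tags derived from the annotation and feed the two results into the AddTags and RemoveTags ELB APIs.

package aws

// diffLoadBalancerTags is a hypothetical helper: it returns the tags that
// need to be added or updated and the tag keys that need to be removed so
// that the load balancer's tags match the annotation.
func diffLoadBalancerTags(desired, actual map[string]string) (toAdd map[string]string, toRemove []string) {
	toAdd = map[string]string{}
	for k, v := range desired {
		if current, ok := actual[k]; !ok || current != v {
			toAdd[k] = v
		}
	}
	for k := range actual {
		if _, ok := desired[k]; !ok {
			toRemove = append(toRemove, k)
		}
	}
	return toAdd, toRemove
}

One caveat with removals: the "actual" map would have to be limited to tags managed through the annotation, otherwise tags owned by the cluster (for example kubernetes.io/cluster/<cluster-name>) could be removed by mistake.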
***** Summary and workarounds *****
-- For a load balancer of type NLB created through a Kubernetes service, tags can currently only be set at creation time via service annotations and cannot be changed by updating the service. Any tags you want on the load balancer have to be specified when the service is created.
-- For a load balancer of type CLB, tags can be set both at creation time and during later updates to the service via the annotation.
-- For both NLB and CLB, removing the "service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags" annotation from the service does not remove the tags that were already applied. They have to be removed manually after updating the service (see the sketch below the references), or they are deleted along with the load balancer when the service is deleted.
References
[1] https://github.com/kubernetes/kubernetes/blob/a6405451674428172956116b4c93710ac03ebddd/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L2900
[2] https://github.com/kubernetes/kubernetes/blob/a6405451674428172956116b4c93710ac03ebddd/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L349
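As a side note on the last workaround above: leftover tags do not strictly have to be removed from the console by hand. A small aws-sdk-go snippet can remove them from an NLB; this is a hypothetical one-off helper, the ARN and tag keys are placeholders, and for a CLB the classic elb API with LoadBalancerNames would be used instead.

package aws

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

// removeLeftoverNLBTags is a hypothetical one-off helper for cleaning up tags
// that remain on an NLB after the annotation was removed from the service.
func removeLeftoverNLBTags(lbARN string, keys []string) error {
	sess := session.Must(session.NewSession())
	client := elbv2.New(sess)
	_, err := client.RemoveTags(&elbv2.RemoveTagsInput{
		ResourceArns: []*string{aws.String(lbARN)},
		TagKeys:      aws.StringSlice(keys),
	})
	return err
}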
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
I'm experiencing the same issue. It would be great to get this fixed. I can't update the tags on my NLB using the annotations discussed in this issue. Surprised this has been open since July 2020.
/remove-lifecycle rotten
@M00nF1sh @kishorj
I'm experiencing the same issue
I'm also curious whether anyone has noticed that, in the case of a CLB, the tags propagate to the LB resource (expected) but do not propagate to the auto-created SG resource (unexpected)? That seems like a bug, and I'm willing to open a separate issue if that's the correct route.
Same for me on v1.19 EKS with NLBs
we are facing this issue with 1.20 as well
Same for metadata.annotations.service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout
We are experiencing the same thing :(. Changing/updating service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags on the service does not update the tags of the NLB on EKS 1.19.
/assign
Will check if this needs to be cherry-picked from upstream.
The current KCM or CCM doesn't modify the load balancer tags for an NLB on service edits; NLB tags are configured during service creation only. CLB tags should work as expected.
You could configure the external AWS Load Balancer Controller, which handles the tag updates correctly.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
We're having a similar issue with 1.19 and CLB.
The first time, the tags are applied correctly, but any further modifications are not synced to the load balancer, even though they are properly updated in the cluster resources (service and/or ingress).
I'm facing the problem with AWS EKS 1.21.5 and AWS Load Balancer Controller v2.2.1.
@izinovik, what does your manifest look like?
I'm deploying NGINX using a custom Helm chart via Terraform:
      deployments = {
        nginx = {
          replicas = 3
          service = {
            meta = {
              annotations = {
                "service.beta.kubernetes.io/aws-load-balancer-ssl-cert" = var.aws_acm_certificate_arn
                // It does not apply tags to existing NLBs
                "service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags" = module.this.aws_tags_string_map
                #: external-dns
                "external-dns.alpha.kubernetes.io/enabled"  = "true"
                "external-dns.alpha.kubernetes.io/hostname" = var.gitlab_hostname
              }
            }
          }
        }
      }
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
sad to see this closed by the Bot :(
@jrsdav: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@kishorj: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.