
Adding tag annotations on service manifest for NLB does not trigger update on AWS

Open HoJinKind opened this issue 5 years ago • 31 comments

BUG REPORT: /kind bug

What happened: Adding this annotation to an existing service: {"metadata":{"annotations":{"service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags":"team=devops,env=qa"}}} did not trigger an update on AWS ELB (no new tags) if it is a NLB.

What you expected to happen: Expected to see those 2 new tags under the existing ELB: team=devops,env=qa

How to reproduce it (as minimally and precisely as possible):

1. Create a service without the annotation.
2. Add the annotation {"metadata":{"annotations":{"service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags":"team=devops,env=qa"}}}
3. Observe that no tags appear on AWS.

Anything else we need to know?: Creating a new service WITH the annotation will create the expected tags on AWS.
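For reference, step 2 can be reproduced with kubectl; the service name my-nlb-service below is a placeholder for an existing NLB-backed Service in your cluster:

```shell
# Patch the tag annotation onto an existing LoadBalancer-type Service.
# With an NLB, this patch succeeds in the cluster but the tags never
# appear on the AWS load balancer.
kubectl patch service my-nlb-service \
  -p '{"metadata":{"annotations":{"service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags":"team=devops,env=qa"}}}'
```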

Environment:

- Kubernetes version (use kubectl version): 1.18.3
- Cloud provider or hardware configuration: AWS
- OS (e.g. from /etc/os-release): Debian GNU/Linux 8 (jessie)
- Install tools:
- Others: Using Kops

HoJinKind avatar Jul 21 '20 07:07 HoJinKind

@HoJinKind: The label(s) sig/aws cannot be applied, because the repository doesn't have them

In response to this:

BUG REPORT: /kind bug /sig aws


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jul 21 '20 07:07 k8s-ci-robot

Hello, I also faced the same issue: tags are not updated for the service when using the aws-load-balancer-additional-resource-tags annotation if it is an NLB. However, they are added to the load balancer if it is a CLB.

So, I looked into the code responsible for ensuring an NLB through a Service spec of type LoadBalancer. First, the method responsible for ensuring an NLB is ensureLoadBalancerv2() [1]. If we look here [2], before the load balancer is provisioned, the tags are fetched from the annotations via the getLoadBalancerAdditionalTags() method and added to the createRequest [3] inside the if block, which is finally used to create the load balancer [4].

Note that execution enters the if block [5] only when the condition (loadBalancer == nil) is met, i.e. only the first time a new load balancer is provisioned.

However, if the load balancer already exists, I do not see any action to apply added or removed tags (which could be determined simply by comparing the output of describe-tags on the NLB with the annotation), and getLoadBalancerAdditionalTags() is not called from within the else block [6], which is responsible for syncing changes to an existing load balancer.

The implementation of the getLoadBalancerAdditionalTags() method can be found here [7]; it converts the comma-separated list of key=value pairs in the ServiceAnnotationLoadBalancerAdditionalTags annotation into a map. ServiceAnnotationLoadBalancerAdditionalTags is a constant for service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags [8].
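For illustration, here is a minimal re-implementation of that parsing behavior as described above. This is a sketch, not the actual provider source; parseAdditionalResourceTags is a hypothetical name:

```go
package main

import (
	"fmt"
	"strings"
)

// parseAdditionalResourceTags mimics the described behavior of
// getLoadBalancerAdditionalTags(): it splits the comma-separated
// "key=value" list from the annotation into a map. Entries without
// an "=" are kept as a key with an empty value (an assumption of
// this sketch).
func parseAdditionalResourceTags(annotation string) map[string]string {
	tags := map[string]string{}
	for _, pair := range strings.Split(annotation, ",") {
		pair = strings.TrimSpace(pair)
		if pair == "" {
			continue
		}
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) == 2 {
			tags[kv[0]] = kv[1]
		} else {
			tags[kv[0]] = ""
		}
	}
	return tags
}

func main() {
	fmt.Println(parseAdditionalResourceTags("team=devops,env=qa"))
}
```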

This led me to believe that the reason the tags are not updated is that there is no underlying code to handle this scenario. Since the tag-creation method is called only when the load balancer is first created, the workaround is to create a new service, which works because all the tags are fetched by getLoadBalancerAdditionalTags() at creation time.

======================= ***** Why tags are updated for the CLB (Classic Load Balancer) *****

Now, as to why it works for a load balancer of type CLB: the method responsible for ensuring a CLB is ensureLoadBalancer() [9]. getLoadBalancerAdditionalTags() is called [10] inside the if block, followed by the createRequest and CreateLoadBalancer calls. This adds the tags to the load balancer at creation time.

However, in the else block [11], which is responsible for syncing changes to the load balancer, getLoadBalancerAdditionalTags() is called again [12], which explains why the CLB is able to update the tags when the annotation changes while the NLB is not:

{
    // Add additional tags
    klog.V(2).Infof("Creating additional load balancer tags for %s", loadBalancerName)
    tags := getLoadBalancerAdditionalTags(annotations)
    if len(tags) > 0 {
        err := c.addLoadBalancerTags(loadBalancerName, tags)
        if err != nil {
            return nil, fmt.Errorf("unable to create additional load balancer tags: %v", err)
        }
    }
}

========================

References [1] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L117 [2] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L126 [3] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L154 [4] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L160 [5] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L131-L173 [6] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L174-L354 [7] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L90 [8] https://github.com/kubernetes/kubernetes/blob/a6405451674428172956116b4c93710ac03ebddd/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L188 [9] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L901 [10] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L934 [11] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L968-L1168 [12] https://github.com/kubernetes/kubernetes/blob/4fb7ed12476d57b8437ada90b4f93b17ffaeed99/staging/src/k8s.io/legacy-cloud-providers/aws/aws_loadbalancer.go#L1120-L1130

damerakd avatar Sep 01 '20 15:09 damerakd

Additionally, I noticed that tags are not removed from the load balancer when the Service is updated and the annotation is removed. This is important because eventually we would hit the tag limit on the load balancer and face new issues as a result.

============== ***** Why tags are not removed for CLB and NLB when the annotation is deleted by updating the service *****

-- For NLB, it is straightforward: since getLoadBalancerAdditionalTags() is not called while syncing changes, there is no code to handle this scenario.

-- For CLB, I think the code does not take the tag-removal scenario into consideration, as I do not see any logic to decide which tags need to be removed versus added. The current logic adds whatever tags are defined on service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags without checking whether they already exist.

========================

So, in my view, the scenarios below are possible for the service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags annotation.


|                                 | Annotation specified | Annotation specified |
|---------------------------------|----------------------|----------------------|
| During initial Service creation | Yes                  | No                   |
| During update                   | Modified/Removed     | Yes                  |

The code we currently have for ensuring a CLB handles the scenario where no annotation is specified initially in the service spec and the annotation is added later during an update: because the length of the tags map is greater than zero, the CLB is updated with the given tags.

It also handles the scenario where we initially create a service of type CLB with the annotation and later update the service with additional tags: the new tags are added, but the existing tags are not reconciled. For example, if we initially use the annotation below during service creation:

service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: env=test,name=test-q6pv,project=inquiry

All the tags mentioned above are added to the Classic Load Balancer. However, if we update the service spec with the annotation below:

service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: name=test-q6pv,project=inquiry,group=ADS

We can see that only the tag group=ADS is added to the load balancer; env=test, the tag removed from the annotation, is not removed and still appears in the console.

=====================

I think the above behavior can be explained by the code below:

{
    // Add additional tags
    klog.V(2).Infof("Creating additional load balancer tags for %s", loadBalancerName)
    tags := getLoadBalancerAdditionalTags(annotations)
    if len(tags) > 0 {
        err := c.addLoadBalancerTags(loadBalancerName, tags)
        if err != nil {
            return nil, fmt.Errorf("unable to create additional load balancer tags: %v", err)
        }
    }
}

During an update, we only check the length of the new tags map and then call the addLoadBalancerTags() method [1], which in turn calls the AddTags() method of the ELB interface [2]. We do not compute the difference between the desired and existing tags and update the load balancer accordingly.
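A proper reconciliation would need to diff the desired tags against the load balancer's current tags instead of blindly adding. A sketch of that diff, using the env=test / group=ADS example from above (diffTags is an illustrative name, not a function from the provider):

```go
package main

import "fmt"

// diffTags compares the tags desired via the annotation against the
// tags currently on the load balancer. It returns the tags to add or
// update, plus the keys to remove. This is an illustrative sketch of
// the missing reconciliation, not actual provider code.
func diffTags(desired, current map[string]string) (toAdd map[string]string, toRemove []string) {
	toAdd = map[string]string{}
	for k, v := range desired {
		if cur, ok := current[k]; !ok || cur != v {
			toAdd[k] = v
		}
	}
	for k := range current {
		if _, ok := desired[k]; !ok {
			toRemove = append(toRemove, k)
		}
	}
	return toAdd, toRemove
}

func main() {
	// Mirrors the example in this comment: env=test should be removed,
	// group=ADS should be added, and the unchanged tags left alone.
	current := map[string]string{"env": "test", "name": "test-q6pv", "project": "inquiry"}
	desired := map[string]string{"name": "test-q6pv", "project": "inquiry", "group": "ADS"}
	add, remove := diffTags(desired, current)
	fmt.Println(add, remove)
}
```

The additions could then go through addLoadBalancerTags() as today, while the removals would need a corresponding remove-tags call that the current code path lacks.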

***** Summary and workarounds *****

-- For a LoadBalancer of type NLB created through a Kubernetes Service, tags can currently be set only at creation time via service annotations; they cannot be changed by updating the service. Any tags you want on the load balancer must therefore be specified when the service is created.

-- For a LoadBalancer of type CLB, tags can be added both at creation time and during updates to the service via annotations (though removed tags are not reconciled, as described above).

-- For both NLB and CLB, removing the service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags annotation from the service does not remove the already-applied tags. They have to be removed manually from the console after updating the service, or they are deleted along with the load balancer when the service is deleted.
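As a manual alternative to the console for cleaning up leftover tags, the AWS CLI can remove them directly; the ARN and load balancer name below are placeholders:

```shell
# NLB (ELBv2 API): remove leftover tags by key.
aws elbv2 remove-tags \
  --resource-arns arn:aws:elasticloadbalancing:REGION:ACCOUNT:loadbalancer/net/NAME/ID \
  --tag-keys env

# CLB (classic ELB API): same idea, addressed by load balancer name.
aws elb remove-tags \
  --load-balancer-names my-clb-name \
  --tags Key=env
```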

References [1] https://github.com/kubernetes/kubernetes/blob/a6405451674428172956116b4c93710ac03ebddd/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L2900 [2] https://github.com/kubernetes/kubernetes/blob/a6405451674428172956116b4c93710ac03ebddd/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L349

damerakd avatar Sep 01 '20 15:09 damerakd

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Nov 30 '20 15:11 fejta-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot avatar Dec 30 '20 16:12 fejta-bot

I'm experiencing the same issue. It would be great to get this fixed. I can't update the tags on my NLB using the annotations discussed in this issue. Surprised this has been open since July 2020.

luisamador avatar Jan 21 '21 10:01 luisamador

/remove-lifecycle rotten

nckturner avatar Feb 08 '21 18:02 nckturner

@M00nF1sh @kishorj

nckturner avatar Feb 18 '21 23:02 nckturner

I'm experiencing the same issue

aSapien avatar Feb 22 '21 13:02 aSapien

Curious also if anyone notices that in the case of CLB, the tags propagate to the LB resource (expected), but do not propagate to auto-created SG resource (unexpected)? (seems like a bug, and I'm willing to open a separate issue, if that's the correct route).

zerog2k avatar Mar 22 '21 19:03 zerog2k

Same for me on v1.19 EKS with NLBs

calvinbui avatar Mar 30 '21 01:03 calvinbui

we are facing this issue with 1.20 as well

kuntalkumarbasu avatar Jun 21 '21 18:06 kuntalkumarbasu

Same for metadata.annotations.service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout

alanthssss avatar Jul 29 '21 06:07 alanthssss

We are experiencing the same thing :(. Changing/updating service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags on service is not updating tags of NLB on EKS 1.19.

Kyslik avatar Aug 11 '21 08:08 Kyslik

/assign

Will check if this needs to be cherry-picked from upstream.

nckturner avatar Aug 20 '21 16:08 nckturner

The current KCM or CCM doesn't modify the load balancer tags for an NLB on service edits; NLB tags are configured during service creation only. CLB tags should work as expected.

You could configure the external AWS Load Balancer Controller, which will handle the tag updates correctly.

kishorj avatar Aug 26 '21 18:08 kishorj

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Nov 24 '21 18:11 k8s-triage-robot

/remove-lifecycle stale

bartoszcisek avatar Nov 25 '21 13:11 bartoszcisek

We're having a similar issue with 1.19 and CLB.

The first time, the tags are applied correctly, but any further modifications are not synced to the load balancer, even though they are properly updated within the cluster resources (Service and/or Ingress).

loliveiracodacy avatar Dec 10 '21 16:12 loliveiracodacy

I'm facing the problem with AWS EKS 1.21.5 and AWS Load Balancer Controller v2.2.1.

izinovik avatar Feb 10 '22 11:02 izinovik

@izinovik, what does your manifest look like?

kishorj avatar Feb 10 '22 16:02 kishorj

I'm deploying NGINX using custom Helm chart via terraform:

      deployments = {
        nginx = {
          replicas = 3
          service = {
            meta = {
              annotations = {
                "service.beta.kubernetes.io/aws-load-balancer-ssl-cert" = var.aws_acm_certificate_arn
                // It does not apply tags to existing NLBs
                "service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags" = module.this.aws_tags_string_map

                #: external-dns
                "external-dns.alpha.kubernetes.io/enabled"  = "true"
                "external-dns.alpha.kubernetes.io/hostname" = var.gitlab_hostname
              }
            }
          }
        }
      }

izinovik avatar Feb 10 '22 18:02 izinovik

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar May 11 '22 18:05 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jun 10 '22 19:06 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot avatar Jul 10 '22 20:07 k8s-triage-robot

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jul 10 '22 20:07 k8s-ci-robot

sad to see this closed by the Bot :(

primeroz avatar Aug 09 '22 13:08 primeroz

@jrsdav: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Aug 09 '22 20:08 k8s-ci-robot

/reopen

kishorj avatar Aug 09 '22 20:08 kishorj

@kishorj: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Aug 09 '22 20:08 k8s-ci-robot