
I don't want ALBC to delete my Load Balancer

Open pierremartinsbr opened this issue 1 year ago • 6 comments

Describe the bug
I have a setup where the Load Balancer already exists, and I just want ALBC to look up this Network Load Balancer and configure the connection with the Service. On Service creation, ALBC configures the Service properly, but when it does, I have two issues: 1 - ALBC changes the Security Group on the Network Load Balancer; 2 - When I delete the Service, ALBC also deletes the Load Balancer.

Steps to reproduce
Create a Service with the annotations below:

```yaml
service.beta.kubernetes.io/aws-load-balancer-type: external
service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: proxy_protocol_v2.enabled=true
```
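In context, these annotations live under the Service's `metadata.annotations`; a minimal sketch of such a Service (the name, namespace, selector, and ports are placeholders, not from the report):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myservicename            # placeholder
  namespace: mynamespace         # placeholder
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: proxy_protocol_v2.enabled=true
spec:
  type: LoadBalancer             # a LoadBalancer-type Service is what ALBC reconciles
  selector:
    app: myapp                   # placeholder
  ports:
    - port: 443                  # placeholder
      targetPort: 8443           # placeholder
      protocol: TCP
```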

Tag the Load Balancer with the tags below:

  • elbv2.k8s.aws/cluster = mycluster
  • service.k8s.aws/resource = LoadBalancer
  • service.k8s.aws/stack = mynamespace/myservicename

Expected outcome
ALBC should not change the Security Group and should not delete the existing Load Balancer.

Environment

  • AWS Load Balancer controller version v2.8.1
  • Kubernetes version v1.30.2
  • Using EKS (yes/no), if so version? yes 1.28

Additional Context:

pierremartinsbr avatar Aug 30 '24 14:08 pierremartinsbr

Can anyone help me?

pierremartinsbr avatar Sep 02 '24 22:09 pierremartinsbr

Hi, thanks for the question! AWS LBC does not support the behavior you describe.

However, the controller has a feature called Target Group Bindings that might meet your use case: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/targetgroupbinding/targetgroupbinding/

This will allow you to provision the load balancer infrastructure completely outside of Kubernetes but still manage the targets with a Kubernetes Service.
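A minimal TargetGroupBinding sketch, assuming a target group created outside Kubernetes and an existing Service (the names and the ARN are placeholders):

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-tgb                   # placeholder
  namespace: my-namespace        # placeholder
spec:
  serviceRef:
    name: my-service             # placeholder; the Service whose endpoints become targets
    port: 443                    # the Service port to register
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-tg/0123456789abcdef  # placeholder
  targetType: instance           # or "ip" to register pod IPs directly
```

With this resource in place, the controller only manages target registration for the referenced target group; it does not provision or delete the load balancer itself.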

There are more details in the documentation for using LBC with an externally-managed load balancer: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/use_cases/self_managed_lb/

Please take a look and let us know if this meets your needs. If not, let us know more details so we can help find a solution :) Thank you!

andreybutenko avatar Sep 04 '24 21:09 andreybutenko

Hi @andreybutenko, thanks for the answer.

I tried the configuration in the link below, but without success. https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/use_cases/self_managed_lb/

In my case, the LBC created a new target group, deleted the Network Load Balancer's Security Group, and created a new one. I don't want this behavior. I just want LBC to resolve my Service to my LB.

This is my TargetGroupBinding config:

```yaml
apiVersion: v1
items:
  - apiVersion: elbv2.k8s.aws/v1beta1
    kind: TargetGroupBinding
    metadata:
      creationTimestamp: "2024-09-25T20:01:59Z"
      finalizers:
        - elbv2.k8s.aws/resources
      generation: 1
      labels:
        service.k8s.aws/stack-name: my-service-name
        service.k8s.aws/stack-namespace: my-namespace
      name: tgb-name
      namespace: my-namespace
      resourceVersion: "1962616"
      uid: f33827af-9f1a-4a3a-b4a7-bbabbf1d3e31
    spec:
      ipAddressType: ipv4
      serviceRef:
        name: my-service-name
        port: 443
      targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxx:targetgroup/xxxxxxxx-tg/f61ef1dcb3b94e96
      targetType: instance
      vpcID: vpc-xxxxxxxxxxxxxx
    status:
      observedGeneration: 1
kind: List
metadata:
  resourceVersion: ""
```

My LBC config:

```yaml
service.beta.kubernetes.io/aws-load-balancer-type: external
```

Is there something I'm missing?
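For comparison, the self-managed LB guide linked above pairs a TargetGroupBinding with `targetType: instance` with a plain NodePort Service that carries no load-balancer provisioning annotations, so the controller has nothing to reconcile besides the binding. A minimal sketch (names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service-name          # placeholder; must match the binding's serviceRef.name
  namespace: my-namespace        # placeholder
spec:
  type: NodePort                 # instance-type target groups register node ports
  selector:
    app: my-app                  # placeholder
  ports:
    - port: 443                  # must match the binding's serviceRef.port
      targetPort: 8443           # placeholder container port
      protocol: TCP
```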

pierremartinsbr avatar Sep 25 '24 20:09 pierremartinsbr

/kind question

pierremartinsbr avatar Sep 26 '24 14:09 pierremartinsbr

I tried other configurations as described here https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/use_cases/self_managed_lb/ and here https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.8/guide/targetgroupbinding/spec/. But it is still not working. Can someone verify a possible bug? @k8s-ci-robot

pierremartinsbr avatar Oct 01 '24 01:10 pierremartinsbr

/kind bug @k8s-ci-robot

pierremartinsbr avatar Oct 01 '24 01:10 pierremartinsbr

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Dec 30 '24 01:12 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jan 29 '25 02:01 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Feb 28 '25 02:02 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Feb 28 '25 02:02 k8s-ci-robot