
LoadBalancer Controller should not try to delete LB resource which is not created by itself

Open lqychange opened this issue 3 years ago • 2 comments

Is your feature request related to a problem?

Have project "A "deployed in EKS "A" with service type of "LoadBalancer", alone with LoadBalancer controller "A". The LB controller have provisioned an internal NLB, as well as a TargetGroup registered in this NLB as a listener. And for this project, a VPC Link also being provisioned alone with the NLB, and being used in API GW.

Now we have project "B" deployed in EKS "B", and have LoadBalancer controller "B" deployed on top of this EKS cluster as well. In project "B", we also have a service with type of "LoadBalancer", to re-use existing resources, we specific the NLB name which created from project "A" in the service manifest yaml, and now LB controller "B" in EKS "B" are only creating new TargetGroup and add a listener to it under existing NLB, and everything running fine.

The issue is that when I try to delete the Service for project "B" from EKS "B", the deletion gets stuck. In the logs of the LB controller pods I saw that the controller was trying to delete the NLB as well, even though the NLB was not provisioned by this controller.
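
The deletion hangs because the controller holds a finalizer on the Service and only removes it once the AWS-side cleanup succeeds. A minimal sketch of what the stuck Service looks like; the finalizer name here is an assumption based on the controller's usual behavior and may differ by version:

```yaml
# Sketch: metadata of a Service stuck in Terminating while the
# controller's finalizer blocks deletion. Finalizer name assumed;
# check your own object with `kubectl get svc -o yaml`.
apiVersion: v1
kind: Service
metadata:
  name: project-b-svc
  finalizers:
    - service.k8s.aws/resources   # released only after AWS cleanup succeeds
  deletionTimestamp: "2022-05-03T13:00:00Z"
```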

Describe the solution you'd like

The LB controller should only delete/deregister resources that it provisioned itself.

Describe alternatives you've considered

No

lqychange avatar May 03 '22 13:05 lqychange

@lqychange, in the current design we don't support using an externally created load balancer. You could label an existing load balancer with the appropriate tags and the controller would pick it up; however, the controller cannot determine whether the resource is external. Using an externally created load balancer is a feature request.
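
For reference, a sketch of the tags the controller stamps on load balancers it manages; the exact keys here are assumptions based on the controller's tagging scheme for Service resources and can vary by version:

```yaml
# Assumed tag keys (verify against your controller version): tags of
# this shape are how the controller recognizes resources it manages.
elbv2.k8s.aws/cluster: my-eks-cluster             # owning cluster name
service.k8s.aws/stack: my-namespace/my-service    # owning Service (namespace/name)
service.k8s.aws/resource: LoadBalancer
```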

You've also mentioned EKS "A" and EKS "B", which I assume are different Kubernetes clusters. As of now, the controller doesn't support sharing a load balancer across multiple clusters.

Would you be able to share how you made the controller re-use the load balancer and share it across multiple clusters? If a load balancer is shared across clusters, the controllers in each cluster will be in a race condition updating the target groups.

kishorj avatar May 04 '22 22:05 kishorj

You could label an existing load balancer with the appropriate tags and the controller would pick it up

Hi @kishorj, I'm wondering which tags are necessary for the LB to be picked up by the controller. Are they the ones defined here? https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/annotations/#resource-tags

I added these tags to my existing LB and then created an Ingress using the stack name as the group name, but the controller deleted my deletion-protected LB and created a new one.
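
Roughly, the Ingress looked like this (a minimal sketch; names and tag values are illustrative, assuming the documented group.name and tags annotations):

```yaml
# Illustrative only: an Ingress that joins IngressGroup "my-stack" and
# asks the controller to tag its resources. All names hypothetical.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    alb.ingress.kubernetes.io/group.name: my-stack        # IngressGroup name
    alb.ingress.kubernetes.io/tags: Team=platform,Env=prod
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```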

junzebao avatar Jul 21 '22 11:07 junzebao

@junzebao @lqychange This is tracked in #228; try to upvote the feature request, and maybe some day it will be available.

omrishilton avatar Oct 24 '22 14:10 omrishilton

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 22 '23 14:01 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Feb 21 '23 15:02 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Mar 23 '23 15:03 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Mar 23 '23 15:03 k8s-ci-robot