aws-load-balancer-controller
LoadBalancer controller should not try to delete an LB resource it did not create
Is your feature request related to a problem?
Have project "A "deployed in EKS "A" with service type of "LoadBalancer", alone with LoadBalancer controller "A". The LB controller have provisioned an internal NLB, as well as a TargetGroup registered in this NLB as a listener. And for this project, a VPC Link also being provisioned alone with the NLB, and being used in API GW.
Now we have project "B" deployed in EKS "B", and have LoadBalancer controller "B" deployed on top of this EKS cluster as well. In project "B", we also have a service with type of "LoadBalancer", to re-use existing resources, we specific the NLB name which created from project "A" in the service manifest yaml, and now LB controller "B" in EKS "B" are only creating new TargetGroup and add a listener to it under existing NLB, and everything running fine.
The issue is that when I try to delete the Service for project "B" from EKS "B", the deletion gets stuck. In the logs of the LB controller "B" pods I can see that this controller is also trying to delete the NLB, which was not provisioned by this controller.
Describe the solution you'd like
The LB controller should only delete/deregister resources it provisioned itself.
Describe alternatives you've considered
None.
@lqychange, in the current design we don't support using an externally created load balancer. You could label an existing load balancer with the appropriate label and the controller would pick it up; however, the controller will not be able to determine whether the resource is external. Using an externally created load balancer is a feature request.
You've also mentioned EKS "A" and EKS "B", and I assume they are different Kubernetes clusters. As of now, the controller doesn't support sharing a load balancer across multiple clusters.
Would you be able to share how you made the controller re-use the load balancer, and how it is shared across multiple clusters? If you share it across multiple clusters, the controllers in each cluster will be in a race condition updating the target groups.
You could label an existing load balancer with the appropriate label, and the controller picks up the load balancer
Hi @kishorj, I'm wondering which labels are necessary for the LB to be picked up by the controller? Are they the ones defined here? https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/annotations/#resource-tags
I added these tags to my existing LB and then created an Ingress using the stack name as the group name, but the controller deleted my deletion-protected LB and created a new one.
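For reference, a minimal sketch of the kind of Ingress described above, assuming the group is set via the `alb.ingress.kubernetes.io/group.name` annotation and intended to match the stack tag on the existing LB; the actual manifest and tag values were not shared, so all names are placeholders (and, as noted, this approach ended with the controller replacing the LB rather than adopting it):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # Placeholder: intended to match the ingress.k8s.aws/stack tag added to the existing ALB.
    alb.ingress.kubernetes.io/group.name: my-existing-stack
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```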
@junzebao @lqychange Try upvoting the feature request in #228; maybe some day this will be available.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.