aws-load-balancer-controller
Ingress cannot be deleted when TooManyUniqueTargetGroupsPerLoadBalancer
Describe the bug
We recently added a new ingress to our cluster, and it failed to deploy with this error:
Failed deploy model due to TooManyUniqueTargetGroupsPerLoadBalancer: You have reached the maximum number of unique target groups that you can associate with a load balancer of type 'application': [100] status code: 400, request id: ...
Steps to reproduce
Just keep adding new rules to your load balancer until you have over 100 target groups, then try to remove the ingress that added the 101st target group. It won't delete; instead, Kubernetes will hang indefinitely waiting for the relevant finalizer to finish. A sketch of the repro follows.
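For reference, a minimal repro sketch (not from the original report; all names and counts are illustrative). Each unique Service/port backend on a shared ALB becomes its own target group, so an IngressGroup with 101 distinct backends exceeds the quota. It assumes an IngressClass named `alb` and that the referenced Services exist:

```sh
# Create 101 ingresses that share one ALB via an IngressGroup; each points
# at a distinct (hypothetical) Service, so each needs its own target group.
for i in $(seq 1 101); do
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: repro-$i
  annotations:
    alb.ingress.kubernetes.io/group.name: repro
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /svc-$i
        pathType: Prefix
        backend:
          service:
            name: svc-$i
            port:
              number: 80
EOF
done

# Deleting the ingress that pushed the ALB past 100 target groups hangs:
# reconciliation keeps failing with TooManyUniqueTargetGroupsPerLoadBalancer,
# so the finalizer never completes.
kubectl delete ingress repro-101
```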
Expected outcome
I expect to be able to delete Ingresses even if they could not be created successfully.
Environment
- AWS Load Balancer controller version: 2.5.1
- Kubernetes version: 1.28
- Using EKS (yes/no), if so version? yes, platform version: eks.7
Additional Context:
@jfly Have you tried removing the finalizer on your ingress to see if the controller then deletes it for you? Also, if possible, could you please provide us the controller logs from around the time of the issue so that we can look into improving this behaviour?
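A minimal sketch of that suggestion (the ingress name is hypothetical; note this strips all finalizers on the object, not only the controller's):

```sh
# JSON-patch away the finalizers so the API server can finish the deletion.
kubectl patch ingress repro-101 \
  --type=json \
  -p='[{"op": "remove", "path": "/metadata/finalizers"}]'
```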
@shraddhabang, we did not, although I assume that would have gotten rid of the ingress. We instead "solved" this by removing some target groups from the load balancer so the controller could finish creating the ingress and then go on to remove it.
I don't have logs, sorry. I am pretty sure this is straightforward to reproduce, though.
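As a side note for anyone applying the same workaround, one way to check how close a load balancer is to the 100-target-group quota from the error message (the ARN variable is a placeholder, not from the report):

```sh
# Count the target groups currently attached to the ALB.
aws elbv2 describe-target-groups \
  --load-balancer-arn "$ALB_ARN" \
  --query 'length(TargetGroups)'
```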
Same issue here. Deleting the finalizer from the ingress deleted the ingress; however, a restart of the aws-load-balancer-controller deployment was needed to get rid of the error.
- AWS Load Balancer controller version: 2.4.7
- Kubernetes version: 1.27
- Using EKS (yes/no), if so version? yes, platform version: eks.7
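The restart described above, assuming the default install (the deployment name and namespace may differ in your cluster):

```sh
kubectl rollout restart deployment/aws-load-balancer-controller -n kube-system
```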
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale