aws-load-balancer-controller
Keep ALBs even if the Ingress has been deleted
Is your feature request related to a problem?
I have a scenario in which I need to remove my apps every evening and bring them back up every morning.
I'm removing my apps using `helm uninstall`, which removes all of the relevant resources for each app, including the Ingress resource.
Every evening the ALB ingress controller detects that no Ingress resource exists and removes the ALB (as expected). Every morning it detects that a new Ingress resource has been created and creates a new ALB, but with a different DNS name.
Describe the solution you'd like
The ability to provide an annotation, or something else, that tells the ALB ingress controller to keep the ALB instead of removing it, so that when I re-create the Ingress resource it will reuse the existing load balancer.
Notes:
- The ALB is dedicated to a specific app and can't be shared with other apps.
- I want to keep the Ingress resource managed by Helm, so I want to avoid using `helm.sh/resource-policy="keep"` (shown in the sketch below).
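For context, this is roughly the Helm-level workaround the note above rules out: a hedged sketch, with a hypothetical `my-app` name and host, of an Ingress annotated so Helm leaves it behind on `helm uninstall` (and the controller therefore never deletes the ALB).

```yaml
# Sketch of the workaround being avoided: helm.sh/resource-policy "keep" is
# real Helm behavior, but it takes the Ingress out of Helm's lifecycle.
# All names and hosts below are hypothetical.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    helm.sh/resource-policy: keep   # Helm skips deleting this resource on uninstall
spec:
  ingressClassName: alb
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```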
I think we might be able to support something similar to Helm's `helm.sh/resource-policy="keep"` for this. But I'm interested in why you would like to do deployments like this 😄 as keeping the ALB around without any backend app wastes money. Have you considered using a fixed custom domain name with Route 53?
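Not from this thread, but for illustration: one common way to put a fixed custom domain in front of a changing ALB DNS name is a DNS controller such as external-dns (assuming it is deployed with write access to the Route 53 hosted zone), which keeps the record pointed at whatever hostname the Ingress status reports. Names and domain below are hypothetical.

```yaml
# Hedged sketch: assumes external-dns is installed and authorized for the
# Route 53 hosted zone; it is a separate controller, not part of the LBC.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                                                     # hypothetical
  annotations:
    external-dns.alpha.kubernetes.io/hostname: my-app.example.com  # hypothetical domain
spec:
  ingressClassName: alb
  defaultBackend:
    service:
      name: my-app        # hypothetical Service
      port:
        number: 80
```

Clients would always resolve `my-app.example.com`; when the controller provisions a new ALB with a new DNS name, the Route 53 record gets re-pointed rather than the clients.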
@dsaydon90, you could use an ingress group with a "resident" ingress that doesn't get deleted, and add/remove the app ingress to/from the group as needed. You can have a separate group for each application, and you can still deploy/uninstall your application's Helm chart as normal.
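A minimal sketch of that approach, with hypothetical names; the `alb.ingress.kubernetes.io/group.name` annotation is the controller's IngressGroup feature, and the "resident" Ingress would live outside the application's Helm chart.

```yaml
# Resident Ingress, managed outside the app chart; it keeps the group's ALB
# (and its DNS name) alive even while the app Ingress is uninstalled.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-resident                 # hypothetical
  annotations:
    alb.ingress.kubernetes.io/group.name: my-app
spec:
  ingressClassName: alb
  defaultBackend:
    service:
      name: placeholder-backend         # hypothetical Service; could just serve 404s
      port:
        number: 80
---
# App Ingress, installed/uninstalled with the Helm chart; it joins the same
# group, so removing it does not delete the shared ALB.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                          # hypothetical
  annotations:
    alb.ingress.kubernetes.io/group.name: my-app
spec:
  ingressClassName: alb
  rules:
    - host: my-app.example.com          # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```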
I agree with @M00nF1sh on the cost part and on the Route 53 option.
First, this would keep me on the safe side: if someone ever removes the Ingress resource by mistake, the ALB will still be there and I won't need to change my DNS records.
Second, the best solution for my case would be to change the DNS record whenever a new ALB is created, but I can't do that due to permissions and security constraints.
Also, instead of creating a resident ingress beforehand, I'd have only one Ingress resource to manage instead of two.
I can close this issue if you think my scenario is not reasonable.
Perhaps this should be a field in IngressClassParams specifying that the ALB for the group should be retained even when the group has zero Ingresses? One would then manage the IngressClass and IngressClassParams outside of the application's Helm chart, possibly in a second chart that doesn't get removed.
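A purely illustrative sketch of that idea; `loadBalancerRetentionPolicy` is a hypothetical field name, not an existing IngressClassParams API, and the resource names are made up.

```yaml
# Hypothetical only: loadBalancerRetentionPolicy does not exist today; it
# marks where a "keep the ALB at zero Ingresses" knob could live.
apiVersion: elbv2.k8s.aws/v1beta1
kind: IngressClassParams
metadata:
  name: my-app-params                   # hypothetical
spec:
  group:
    name: my-app                        # existing field: ties the class to an IngressGroup
  loadBalancerRetentionPolicy: Retain   # hypothetical field proposed in this discussion
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: my-app-alb                      # hypothetical; managed outside the app chart
spec:
  controller: ingress.k8s.aws/alb
  parameters:
    apiGroup: elbv2.k8s.aws
    kind: IngressClassParams
    name: my-app-params
```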
@johngmyers good point on using IngressClassParams instead of annotations 😄 I'm also wondering whether we should keep all the resources (ALB/security groups/listeners/target groups) around, or just the ALB.
/kind feature
For the Listener/TG, the question would be what the preferred behavior is for incoming connections: should they get ECONNREFUSED or a 404? I suspect it should be ECONNREFUSED, which means deleting the Listener.
Are the SGs likely to be referenced by resources not under the control of the LBC? If so, it might be worth keeping them around; otherwise, it doesn't matter.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
We have a use case for this. Any plans on picking this up?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.