aws-load-balancer-controller
Dry Run Mode
This is a Feature Request
Has a Dry Run mode been considered?
Use Case: We are working towards upgrading from the legacy in-tree controller to this much-improved controller, but we are having difficulty assuring ourselves that the migration will not cause downtime for in-use load balancers.
I have read through the source code and understand that, so long as we tag the existing load balancer so that it is recognised by the tracking code in https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/pkg/deploy/tracking/provider.go, the controller will adopt it. I have also taken the precaution of removing the various Delete IAM grants from the ServiceAccount policy. However, the controller can still cause downtime in a number of ways; for instance, I have seen it create a new target group and update the load balancer listeners to point at that instead.
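For reference, here is a minimal sketch (not from this issue) of how one might pre-tag an existing load balancer so the tracking code can recognise it, using the AWS SDK for Go v2. The tag keys (`elbv2.k8s.aws/cluster`, `service.k8s.aws/stack`, `service.k8s.aws/resource`), the cluster name, the namespace/service values, and the ARN are all assumptions; verify them against pkg/deploy/tracking/provider.go for your controller version before relying on them.

```go
// Sketch: apply the controller's assumed tracking tags to an existing load balancer
// so it can be adopted instead of replaced. Values below are placeholders.
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	elbv2 "github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2"
	"github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2/types"
)

func main() {
	ctx := context.Background()

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := elbv2.NewFromConfig(cfg)

	// ARN of the existing load balancer to adopt (placeholder).
	lbARN := "arn:aws:elasticloadbalancing:region:account-id:loadbalancer/..."

	_, err = client.AddTags(ctx, &elbv2.AddTagsInput{
		ResourceArns: []string{lbARN},
		Tags: []types.Tag{
			// Cluster ownership tag used by the tracking provider (assumed key).
			{Key: aws.String("elbv2.k8s.aws/cluster"), Value: aws.String("my-cluster")},
			// Stack/resource tags identifying the Service that should own this LB (assumed keys/values).
			{Key: aws.String("service.k8s.aws/stack"), Value: aws.String("my-namespace/my-service")},
			{Key: aws.String("service.k8s.aws/resource"), Value: aws.String("LoadBalancer")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("tracking tags applied; the controller should now recognise this load balancer")
}
```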
In all of these cases I would sleep easier knowing we had previewed the modifications the controller planned to make before running them against a live environment.
@RoryCrispin Thanks for this feature request. How would you prefer the dry-run mode to output the details? e.g., logs outlining the changes that would be made?
~~I think for now, it would work if you remove all "write" IAM permissions assigned to the controller (except for tag writes).~~
I just realized you were migrating from the legacy in-tree controller to this one. We don't support such a migration; you need to recreate the Service. The supported migration path is from the old "aws-alb-ingress-controller" to this v2 version.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.