aws-load-balancer-controller
Installing multiple instances of the ALB controller with different configuration and ingress class into the same namespace (kube-system)
Describe the bug
I am trying to install multiple, differently configured instances of the ALB controller into the kube-system namespace. It won't work because the first instance claims ownership of the aws-load-balancer-tls Secret in the target namespace via the meta.helm.sh/release-name annotation and prevents the second instance from doing the same.
Steps to reproduce
values.yml for the first instance:

clusterName: my-cluster
ingressClass: alb-a
watchNamespace: app
fullnameOverride: alb-a-controller
serviceAccount:
  create: false
  name: aws-load-balancer-controller
defaultTags:
  ingressClass: alb-a
values.yml for the second instance:

clusterName: my-cluster
ingressClass: alb-b
watchNamespace: app
fullnameOverride: alb-b-controller
serviceAccount:
  create: false
  name: aws-load-balancer-controller
defaultTags:
  ingressClass: alb-b
Then the instances are deployed using:
helm upgrade -i alb-a-controller eks/aws-load-balancer-controller -n kube-system -f alb-a/values.yml
helm upgrade -i alb-b-controller eks/aws-load-balancer-controller -n kube-system -f alb-b/values.yml
Expected outcome
Both controller instances are deployed and each handles a separate ingress class.
Actual outcome
Error: rendered manifests contain a resource that already exists.
Unable to continue with install: Secret "aws-load-balancer-tls" in
namespace "kube-system" exists and cannot be imported into
the current release: invalid ownership metadata; annotation
validation error: key "meta.helm.sh/release-name" must equal
"alb-b-controller": current value is "alb-a-controller"
Environment
- AWS Load Balancer controller version: 2.2
- Kubernetes version: 1.21
- Using EKS (yes/no), if so version? Yes, 1.21 eks.2
Additional Context:
Installing the alb-a and alb-b controllers into different namespaces doesn't work either. After installing alb-a into the infra-a namespace (which worked fine), attempting to install alb-b into the infra-b namespace fails with:
Error: rendered manifests contain a resource that already exists.
Unable to continue with install: MutatingWebhookConfiguration
"aws-load-balancer-webhook" in namespace "" exists and cannot
be imported into the current release: invalid ownership metadata;
annotation validation error: key "meta.helm.sh/release-name"
must equal "alb-b-controller": current value is "alb-a-controller";
annotation validation error: key "meta.helm.sh/release-namespace"
must equal "infra-b": current value is "infra-a"
It appears that the webhook configuration is not shareable and is cluster-scoped rather than namespaced to the controller?
@dnutels The current controller is designed to run as a single deployment, and we have updated our docs to reflect that: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/configurations/
What's your use case for running multiple deployments instead of a single one?
Thank you for clarifying; I somehow missed that one. It's somewhat academic.
The main use case is to be able to configure the controller differently for different namespaces/ingress classes. I realize that at this point most (but not all) of the controller configuration can be overridden at the Ingress level.
I would imagine that having different service accounts might be useful...
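A minimal sketch of what that might look like, assuming the chart's standard serviceAccount values; the role ARN and names below are placeholders, not taken from this thread:

serviceAccount:
  create: true
  name: alb-a-controller
  annotations:
    # hypothetical IRSA role, one per controller instance
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/alb-a-controller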
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Our use case is to have an alb-internal and an alb-external IngressClass, and then set the scheme of the ALB in the associated IngressClassParams so that we don't have to annotate every single Ingress with ALB annotations.
At the moment the external ALB controller creates the stack for an external ingress and then the internal ALB controller removes it all again (I did think that was a bug, but this issue makes it clear that it's more of an unimplemented feature).
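For context, a hedged sketch of that internal/external split using IngressClassParams (supported by the controller since v2.3); the alb-internal name comes from the comment above, while internal-params is illustrative:

apiVersion: elbv2.k8s.aws/v1beta1
kind: IngressClassParams
metadata:
  name: internal-params              # illustrative name
spec:
  scheme: internal                   # ALB scheme set once, instead of per-Ingress annotations
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb-internal
spec:
  controller: ingress.k8s.aws/alb    # today both classes resolve to the same single controller
  parameters:
    apiGroup: elbv2.k8s.aws
    kind: IngressClassParams
    name: internal-params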
/remove-lifecycle rotten
Another reason for wanting a separate ingress class is for use with external-dns.
We use separate external-dns controllers, one that controls private DNS and one that controls public DNS, so that each controller knows which zones to manage records in.
We use annotation filters (and, more likely, ingress class filters soon) to associate a particular ingress class with a particular external-dns controller. Currently, therefore, we can manage only one of public or private ALB ingresses.
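As a rough illustration of that split, assuming two separate external-dns deployments; the zone names and the annotation key/value used for filtering are placeholders, not the commenter's actual configuration:

# args for the public external-dns instance (placeholder values)
- --source=ingress
- --domain-filter=example.com
- --annotation-filter=dns.example.com/visibility=public

# args for the private external-dns instance
- --source=ingress
- --domain-filter=internal.example.com
- --annotation-filter=dns.example.com/visibility=private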
Yet a third reason is if you need a network load balancer and an application load balancer for different workloads (for example, doing TCP passthrough for one workload vs needing WAF protection for another workload)
Our use case is to have an alb-internal and an alb-external IngressClass, and then set the scheme of the ALB in the associated IngressClassParams so that we don't have to annotate every single Ingress with ALB annotations. At the moment the external ALB controller creates the stack for an external ingress and then the internal ALB controller removes it all again (I did think that was a bug, but this issue makes it clear that it's more of an unimplemented feature).
We have this use case as well. I got the alb-ingress helm chart (v2.4.1) deployed twice by specifying nameOverride and fullnameOverride with a suffix. Only one of the controllers is doing all the work, because both deployments still share the same ConfigMap for leader election. Looks like this works for our ALB + EKS Fargate setup only; we'd hit #2185 otherwise.
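A minimal sketch of that workaround, assuming a second Helm release of the chart; the names below are illustrative rather than the commenter's exact values:

# values for the second release
nameOverride: aws-load-balancer-controller-b
fullnameOverride: aws-load-balancer-controller-b
clusterName: my-cluster
ingressClass: alb-b

# helm upgrade -i alb-b eks/aws-load-balancer-controller -n kube-system -f values-b.yaml

Note that, as described above, both deployments still contend for the same leader-election ConfigMap, so only one controller actually reconciles at a time.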
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
We are still here… /remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Still an issue. Confirming the use case - public vs. internal ALBs - and the need for separate configuration.
In fact, instead of installing multiple controller instances, a more elegant solution could probably be introducing support for multiple ingressClass definitions handled by a single controller (Helm release).
Thank you!
/remove-lifecycle stale
I'm having a similar issue. The inability to specify watchNamespace for more than one namespace, combined with the inability to create multiple deployments, means it is impossible to deploy load balancers for only two specific, externally facing namespaces.
Would like to echo the sentiments above, particularly for using this to manage public/private DNS alongside external-dns.
We currently work around this limitation with a script that temporarily removes the MutatingWebhookConfiguration and ValidatingWebhookConfiguration objects, creates the TargetGroupBindings that the secondary aws-load-balancer-controllers use, and then recreates the MutatingWebhookConfiguration and ValidatingWebhookConfiguration objects...
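A rough sketch of that kind of workaround, assuming the default aws-load-balancer-webhook configuration names and an existing target group; the TargetGroupBinding values are placeholders:

# save and temporarily remove the webhook configurations
kubectl get mutatingwebhookconfiguration aws-load-balancer-webhook -o yaml > mwh.yaml
kubectl get validatingwebhookconfiguration aws-load-balancer-webhook -o yaml > vwh.yaml
kubectl delete mutatingwebhookconfiguration aws-load-balancer-webhook
kubectl delete validatingwebhookconfiguration aws-load-balancer-webhook

# create the TargetGroupBinding used by the secondary controller (placeholder values)
kubectl apply -f - <<EOF
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-service-tgb
  namespace: app
spec:
  serviceRef:
    name: my-service
    port: 80
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/placeholder/0123456789abcdef
  targetType: ip
EOF

# restore the webhook configurations
kubectl apply -f mwh.yaml
kubectl apply -f vwh.yaml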
+1 for actual multiple aws-load-balancer-controller support. Also thank you @M00nF1sh and others who have continually improved this project!
would love to see this!
Hi,
we have EXACTLY this need: having an internal-lb controller and an external-lb controller running, and then letting our users choose the correct ingress class pointing to the correct LB.
We are using the nginx-ingress controller and were evaluating a move to the AWS controller, which we thought would be really easy, until we found this issue :-(
This is a total deal-breaker for us, and we will not be able to move forward with replacing the nginx controller because of it.
Even worse, the official documentation mentions that AWS is working on this: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.6/deploy/configurations/ (see the limitation warning at the top)
But it then points to this issue, which is closed: https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2185
How is this closed if it's not resolved? Making this work should be a priority; it makes no sense not to be able to run multiple LB configurations.
Hope we get news about this soon.
I think the TargetGroupBinding issue could be solved if the controller supported some sort of required annotation or label on the TargetGroupBinding objects. That way it could easily determine which TargetGroupBindings go with which aws-load-balancer-controller installation.
This seems like a simple and nice approach, as mentioned here.
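Purely to illustrate the proposal (the controller does not support this today), a hypothetical TargetGroupBinding carrying a label that a given controller installation would be configured to select on:

apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-service-tgb
  namespace: app
  labels:
    # hypothetical selection label; not an existing controller feature
    elbv2.k8s.aws/controller-instance: alb-external
spec:
  serviceRef:
    name: my-service
    port: 80
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/placeholder/0123456789abcdef
  targetType: ip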
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Chiming in that I would also benefit from the ability to serve multiple IngressClasses from one deployment for internal vs external purposes.
Amazing how this still seems a low priority ticket... and we are paying to use EKS... this is really sad :-(
I have the same internal/external ingressClass need. Would love to see some movement on this.
I have the same issue as well. I should be able to create both internal and external ingress. Any updates?
Still an issue. Confirming the use case - public vs. internal ALBs - and the need for separate configuration. In fact, instead of installing multiple controller instances, a more elegant solution could probably be introducing support for multiple ingressClass definitions handled by a single controller (Helm release). Thank you!
/remove-lifecycle stale
@dim-at-ocp Does the installation of multiple controller instances work (e.g. having an internal ingress and an external ingress)?
aws-lb-controller instance #1 -> internal ingress (private ALB)
aws-lb-controller instance #2 -> external ingress (public ALB)
Thank you!
I guess you still hit #2185 with that approach.