aws-load-balancer-controller
Exclude some namespaces from mutating webhook admission interception
Is your feature request related to a problem?
When installed via the Helm chart, the MutatingWebhookConfiguration has a rule for CREATE on Service resources, and there is no way to configure exclusions, such as excluding the kube-system namespace, so that even if the ALB controller's mutating webhook admission stops working properly, resources in kube-system are not impacted.
Unless there is another way to prevent the impact? Happy to take advice.
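For reference, Kubernetes admission webhooks do support a `namespaceSelector`, so a minimal sketch of the kind of exclusion being requested could look like the following. The configuration and webhook names (`aws-load-balancer-webhook`, `mservice.elbv2.k8s.aws`) are assumptions based on recent chart versions, and the `kubernetes.io/metadata.name` label is set automatically on namespaces since Kubernetes 1.21:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: aws-load-balancer-webhook   # assumed name; check your installed chart
webhooks:
  - name: mservice.elbv2.k8s.aws    # assumed name of the Service webhook
    # ... clientConfig, rules, etc. as rendered by the chart ...
    namespaceSelector:
      matchExpressions:
        # skip admission for Services in kube-system
        - key: kubernetes.io/metadata.name
          operator: NotIn
          values:
            - kube-system
```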
@zoezhangmattr, currently the namespace is the same as the Helm chart release namespace. Do you mean you install the load balancer controller in kube-system, but do not want the webhook service to be created in kube-system? Just for my understanding, how would a failure in the LBC mutating webhook admission affect other resources in the namespace? It should be scoped down to specific resources only, right?
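For context, the webhook rule itself is cluster-scoped. A sketch of the rule the issue describes (webhook name assumed, as above); with no `namespaceSelector`, it matches Service creation in every namespace:

```yaml
webhooks:
  - name: mservice.elbv2.k8s.aws   # assumed name
    rules:
      - apiGroups: [""]            # core API group, i.e. v1 Services
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["services"]
    # no namespaceSelector here, so the rule applies to all namespaces
```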
Hi, thanks for your reply.
I installed the controller in a non-kube-system namespace, e.g. an alb-controller namespace.
Without further configuration, the webhook is called whenever a Service is created, for example when installing an nginx release and exposing its Service. That means that if the ALB controller is down (for example, because the caBundle is incorrect), then during the controller's downtime the nginx Service cannot be created, because the webhook is still called.
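One standard way to keep webhook downtime from blocking admission, assuming the chart lets you override it, is the webhook's `failurePolicy` field. A sketch (webhook name assumed, as above):

```yaml
webhooks:
  - name: mservice.elbv2.k8s.aws   # assumed name
    # With Ignore, the API server admits the Service even if the webhook
    # endpoint is unreachable (e.g. controller down or bad caBundle).
    # With Fail, admission is rejected, which is the behavior described above.
    failurePolicy: Ignore
    timeoutSeconds: 10
```

The trade-off is that with `Ignore`, Services created during downtime are not mutated by the controller.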
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten