aws-load-balancer-controller
Add additional settings to IngressClassParams
Is your feature request related to a problem?
As a provider of Kubernetes clusters to internal service teams, I want to configure ALB-level settings at the `IngressClass` level, so that service teams cannot impact other teams using the same `IngressClass` by misconfiguring annotations on their `Ingress` objects.
Describe the solution you'd like
As a rule of thumb, there should be an `IngressClassParams` field for any `Ingress` annotation that has a `MergeBehavior` of `Exclusive`. This field should override/prevent the `Ingress`-level annotations.

There should probably be a way to explicitly configure the default behavior, so that the `Ingress` annotations have no effect in the `IngressClass`, or to state that no `Ingress` annotations of `Exclusive` `MergeBehavior` have effect.
The fields most important to me are:
- subnets
- ssl-policy
- inbound-cidrs
Other `Exclusive` fields that are missing from `IngressClassParams` (see the hypothetical sketch after this list) are:
- security-groups
- load-balancer-name
- customer-owned-ipv4-pool
- wafv2-acl-arn
- waf-acl-id
- shield-advanced-protection
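
For illustration only, such fields might look roughly like this on `IngressClassParams`. Every `spec` field below is a hypothetical mirror of the corresponding annotation, not an actual API; only the `apiVersion` and `kind` are the real ones used by the controller, and all values are placeholders:

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: IngressClassParams
metadata:
  name: team-shared                   # placeholder
spec:
  subnets:                            # hypothetical: mirrors the subnets annotation
    - subnet-0123456789abcdef0
  sslPolicy: ELBSecurityPolicy-TLS13-1-2-2021-06  # hypothetical: mirrors ssl-policy
  inboundCIDRs:                       # hypothetical: mirrors inbound-cidrs
    - 10.0.0.0/8
  securityGroups:                     # hypothetical: mirrors security-groups
    - sg-0123456789abcdef0
  loadBalancerName: team-shared-alb   # hypothetical: mirrors load-balancer-name
```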
Describe alternatives you've considered
Muddle through by configuring the annotations on a single `Ingress` in the `IngressClass`. It is extremely inconvenient.
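
For context, this workaround looks roughly like the following: one `Ingress` joins an explicit group and carries the `Exclusive` annotations, which then apply to the ALB shared by the group. All names here are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: group-settings                # placeholder: the one Ingress carrying ALB-level settings
  annotations:
    alb.ingress.kubernetes.io/group.name: team-shared
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
    alb.ingress.kubernetes.io/inbound-cidrs: 10.0.0.0/8
spec:
  ingressClassName: alb
  defaultBackend:                     # placeholder; Ingress validation requires a rule or default backend
    service:
      name: placeholder-svc
      port:
        number: 80
```

Every team's `Ingress` must remember to join the group, and nothing prevents a team from setting conflicting annotations of its own, which is what makes this inconvenient.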
/kind feature

Some of the fields would imply an ingress group, either explicit or implicit; for example, `load-balancer-name`. So perhaps the API should validate that a `group` is specified if any of those fields are set?
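
One way such a constraint could be expressed is a CEL validation rule on the `IngressClassParams` CRD schema. This is only a sketch, assuming a hypothetical `spec.loadBalancerName` field alongside the existing `spec.group`:

```yaml
# Excerpt of a CRD schema; a sketch only, not the controller's actual schema.
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      x-kubernetes-validations:
        # Reject loadBalancerName unless an explicit group is also configured.
        - rule: "!has(self.loadBalancerName) || has(self.group)"
          message: "spec.group must be set when spec.loadBalancerName is set"
```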
Which fields other than `load-balancer-name` imply a group?
I verified; this is the only one. With `load-balancer-name`, the controller doesn't group multiple Ingresses together; however, all Ingresses will refer to the same ALB. This will result in a race condition.
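
To illustrate the hazard, consider two ungrouped `Ingress`es (placeholder names throughout) that both pin the same ALB name; each one's reconciliation treats the ALB as its own and overwrites the other's configuration:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: team-a                        # placeholder
  annotations:
    # No group.name: each Ingress reconciles "shared-alb" independently.
    alb.ingress.kubernetes.io/load-balancer-name: shared-alb
spec:
  ingressClassName: alb
  defaultBackend:
    service:
      name: svc-a                     # placeholder
      port:
        number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: team-b                        # placeholder
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: shared-alb
spec:
  ingressClassName: alb
  defaultBackend:
    service:
      name: svc-b                     # placeholder
      port:
        number: 80
```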
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
I would like to have additional parameters in `IngressClassParams` as well. My use case calls for a `subnets` setting. I can't update the tags on the subnets in the VPC, as those are supplied and provisioned by another team, and my team has no rights to update tags on the subnets (so I can't set the `kubernetes.io/role/elb = 1` tag for auto-discovery).

Requiring the subnet IDs in the annotations for each deployed Ingress is cumbersome; people keep forgetting them, and we need to distribute the correct set of IDs for each situation and account.

It would be a lot easier for me to define these as a default set of subnets on the controller, to be used if none are supplied (which would cover 95% of the cases), or as a setting in `IngressClassParams`, which would be the easiest solution.

And after a bit of digging, I discovered that I was looking at an outdated set of documentation. As stated in https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.7/guide/ingress/ingress_class/#specsubnets, it is perfectly possible to supply subnet IDs or tags in the `IngressClassParams` object.
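
For reference, a minimal example along the lines of that documentation page (the subnet IDs are placeholders):

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: IngressClassParams
metadata:
  name: class-with-subnets            # placeholder
spec:
  subnets:
    # Per the linked docs, subnets can be selected by explicit IDs or by tags.
    ids:
      - subnet-0123456789abcdef0
      - subnet-0123456789abcdef1
```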
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.