cloud-provider-aws
Introduce a well-known tag to exclude subnets from the auto-discovery procedure for ELB-backed services
What would you like to be added:
Another well-known tag for subnets that excludes them from auto-discovery and prevents their attachment to an ELB,
or
extend the kubernetes.io/role/elb semantics and allow specifying kubernetes.io/role/elb=0 to exclude a subnet.
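A minimal sketch in Go of how the proposed convention could be checked during discovery; the package and helper names are hypothetical, not existing provider code:

```go
// Sketch only: a subnet tagged kubernetes.io/role/elb=0 opts out of
// ELB attachment; any other value (or no tag) keeps today's behavior.
package subnetdiscovery

import "strings"

const elbRoleTagKey = "kubernetes.io/role/elb"

// excludedFromELB reports whether a subnet's tags opt it out of ELB
// attachment under the proposed kubernetes.io/role/elb=0 convention.
func excludedFromELB(tags map[string]string) bool {
	v, ok := tags[elbRoleTagKey]
	return ok && strings.TrimSpace(v) == "0"
}
```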
Why is this needed:
Currently the subnet auto-discovery procedure for ELBs relies on the kubernetes.io/cluster/{clusterId} and/or kubernetes.io/role/elb tags; however, it might be desirable not to attach subnets in certain zones (Local Zones, Wavelength Zones) while still keeping kubernetes.io/cluster/{clusterId} for other automation purposes.
Some context (OpenShift-specific, unfortunately): https://bugzilla.redhat.com/show_bug.cgi?id=2105337
/kind feature
We discussed this issue at the SIG Cloud Provider meeting on 31 August 2022.
Follow-up questions:
- Would this work need an enhancement to progress further?
- A workaround might be to specify the subnets for the load balancer explicitly, but this won't necessarily work in this situation.
/assign @kishorj
/assign @nckturner
The auto-discovery already excludes subnets that are not tagged for the current cluster but contain kubernetes.io/cluster/{clusterId} tags for some other clusters. The role tag alone does not impact the auto-discovery; it helps determine precedence when there are multiple matches.
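As a rough illustration of that precedence rule (using a simplified Subnet stand-in, not the provider's actual types): among candidates in the same availability zone, a subnet carrying the role tag wins over one without it.

```go
// Sketch of the precedence rule: keep one subnet per AZ, preferring a
// subnet that carries the kubernetes.io/role/elb tag over one without it.
package subnetdiscovery

// Subnet is a simplified stand-in for the provider's subnet data.
type Subnet struct {
	ID   string
	AZ   string
	Tags map[string]string
}

func pickPerAZ(subnets []Subnet) map[string]Subnet {
	best := make(map[string]Subnet)
	for _, s := range subnets {
		cur, seen := best[s.AZ]
		_, sHasRole := s.Tags["kubernetes.io/role/elb"]
		_, curHasRole := cur.Tags["kubernetes.io/role/elb"]
		if !seen || (sHasRole && !curHasRole) {
			best[s.AZ] = s
		}
	}
	return best
}
```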
For the auto-discovery, we can restrict to subnets with ZoneType availability-zone only, since Outposts, Wavelength Zones, and Local Zones don't support NLB or CLB at the moment. This is a simple fix and will not depend on the end user applying correct tags to all of their subnets in the other zones.
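A hedged sketch of that restriction using the AWS SDK for Go v1 (which the provider uses); the function names here are illustrative, not the actual change:

```go
// Sketch: keep only subnets whose zone has ZoneType "availability-zone",
// dropping Local Zone, Wavelength Zone, and Outpost subnets up front.
package subnetdiscovery

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/ec2/ec2iface"
)

// standardZones returns the names of zones in the current region whose
// ZoneType is "availability-zone".
func standardZones(client ec2iface.EC2API) (map[string]bool, error) {
	out, err := client.DescribeAvailabilityZones(&ec2.DescribeAvailabilityZonesInput{})
	if err != nil {
		return nil, err
	}
	zones := make(map[string]bool, len(out.AvailabilityZones))
	for _, az := range out.AvailabilityZones {
		if aws.StringValue(az.ZoneType) == "availability-zone" {
			zones[aws.StringValue(az.ZoneName)] = true
		}
	}
	return zones, nil
}

// filterStandardZoneSubnets drops subnets outside the standard zones.
func filterStandardZoneSubnets(subnets []*ec2.Subnet, zones map[string]bool) []*ec2.Subnet {
	kept := make([]*ec2.Subnet, 0, len(subnets))
	for _, s := range subnets {
		if zones[aws.StringValue(s.AvailabilityZone)] {
			kept = append(kept, s)
		}
	}
	return kept
}
```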
/triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
I need to update the patch after the kOps changes were merged, but this issue is still relevant.
/remove-lifecycle stale