aws-load-balancer-controller
Modify all SecurityGroups tagged with `kubernetes.io/cluster/$name`
Is your feature request related to a problem?
Currently, the controller fails if there are multiple security groups tagged with `kubernetes.io/cluster/$name`.
Describe the solution you'd like
It's common to tag both control plane and data plane resources with `kubernetes.io/cluster/$name`. This is a powerful discovery mechanism that reduces the configuration burden on end users. Right now, the lb controller assumes that there is only one security group with this tag, and that it's owned by the control plane. This creates compatibility issues like https://github.com/weaveworks/eksctl/issues/4054.
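To check whether a given cluster is affected, an AWS CLI query along these lines lists every security group carrying the tag (the cluster name below is a placeholder):

# Lists every security group tagged kubernetes.io/cluster/<name>.
# Discovery breaks when this returns more than one group.
aws ec2 describe-security-groups \
  --filters "Name=tag-key,Values=kubernetes.io/cluster/my-cluster" \
  --query "SecurityGroups[].{ID:GroupId,Name:GroupName}" \
  --output table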
Describe alternatives you've considered
We could potentially use a different tag for the control plane security groups, but this may be challenging to coordinate across all providers (kops, capi, eks, etc.).
/kind feature
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
@M00nF1sh any progress on prioritization here?
I am also getting this issue when using the default Karpenter provisioner in a newly created EKS cluster: https://karpenter.sh/v0.7.1/getting-started/getting-started-with-eksctl/#provisioner
@armujahid, we made Karpenter flexible to arbitrary tags, so you should be able to work around this. Still, it would be great to have this feature in the lbc.
@ellistarn Yes, that's what I have done, by using another security group selector.
@armujahid @ellistarn Can you tell me what security group selector I can use to fix this? I also have a plain vanilla eksctl cluster running Karpenter.
@mkotsalainen I am using the `kubernetes.io/cluster/${CLUSTER_NAME}: owned` security group selector for my vanilla Karpenter cluster.
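For context, on the v1alpha5 API that selector sits under spec.provider. A minimal sketch, with the cluster name as a placeholder and the rest of the Provisioner spec omitted:

# Minimal sketch of where the selector lives (Karpenter v1alpha5 API);
# cluster name is a placeholder and the rest of the spec is omitted.
cat <<EOF | kubectl apply -f -
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  provider:
    securityGroupSelector:
      kubernetes.io/cluster/my-cluster: owned
EOF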
Ok, thanks. Before I saw your answer I tried:

provider:
  securityGroupSelector:
    Name: "*eks-cluster-sg-${CLUSTER_NAME}*"

and that seems to fix my problem (getting the traefik ingress controller to work with Karpenter).
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
@M00nF1sh any word on this one?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This hasn't been fixed yet, right?
@hitsub2, we will address it in v2.5.0. /reopen
@kishorj: Reopened this issue.
In response to this:
@hitsub2, we will address it in v2.5.0. /reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@kishorj what is the ETA for 2.5.0?
+1 to the question: When is the 2.5.0 release expected?
I'm currently totally stuck. :-\
How have others gotten this working with the terraform eks module + load balancer controller?
What is the workaround for this issue when you have multiple PodSecurityGroups attached to a pod?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
This shouldn't be considered rotten or stale. Per the maintainers' feedback, it should remain open until v2.5.0 is released.
My clusters ran into the same issue when we migrated over to Karpenter for node management (multiple SGs tagged with the cluster tag).
I opened #3147 to let users specify additional tags for the load balancer controller to match when resolving which security group to use.
My clusters are running with the changes from that PR, and after specifying additional tags everything seems to be working well. Hoping the PR can get some feedback and make it into a release soon, as this seems to be affecting several folks 🙏
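If anyone wants to try it out, this is roughly how I'd expect the flag to be wired in once it ships. The flag name below is the one proposed in the PR, so verify it against whatever release it lands in; the tag key/value is just an example for disambiguating the node security group:

# Appends the proposed flag to the controller's args (flag name per the
# PR; verify against your release). The tag key=value is only an example.
kubectl -n kube-system patch deployment aws-load-balancer-controller \
  --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--service-target-eni-security-group-tags=karpenter.sh/discovery=my-cluster"}]'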
@carflo - Thanks for doing that (for the community)! I'll try and check out your changes to see if they work for us :)
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@k8s-ci-robot - NO PLEASE!
@kishorj can you reopen this? I have a PR that addresses this issue but I am awaiting feedback.
/reopen
@kishorj: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.