aws-load-balancer-controller
Support for referencing existing security groups for inbound traffic
Is your feature request related to a problem?
Currently it is possible to allow traffic from CIDR ranges using the alb.ingress.kubernetes.io/inbound-cidrs annotation. This works well for traffic from the public IP space, but for internal ALBs it would be great to be able to allow inbound traffic from specific security groups. For example, traffic from API Gateway (via a VPC Link) could be allowed to reach an ALB without opening the ALB to the whole subnet or VPC; instead, one would reference the security group of the VPC Link. There are other use cases as well, e.g. allowing a specific EC2 instance (not part of the EKS cluster) to connect to an ALB while another EC2 instance cannot.
Describe the solution you'd like
I would like an annotation, for example alb.ingress.kubernetes.io/inbound-security-groups, where I can specify a list of security groups that are allowed to send traffic to the listeners of the ALB.
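As a minimal sketch of how the proposed annotation could look on an Ingress, assuming an IngressClass named alb and placeholder names and security group IDs (the inbound-security-groups annotation is hypothetical and does not exist in the controller today):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-api
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    # Proposed (hypothetical) annotation: only these security groups, e.g.
    # the one attached to an API Gateway VPC Link, may reach the listeners.
    alb.ingress.kubernetes.io/inbound-security-groups: sg-0123456789abcdef0
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-api
                port:
                  number: 80
```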
Describe alternatives you've considered
Configure a custom security group using the alb.ingress.kubernetes.io/security-groups annotation, and reference the SG of the API Gateway in an inbound rule of that custom SG before attaching it to the ALB. But this would be extra work, since I then have to create and maintain a security group that would otherwise be managed by the load balancer controller.
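A minimal sketch of this alternative, assuming an IngressClass named alb and a placeholder SG ID; the security group has to be created by hand beforehand, with an inbound rule whose source is the API Gateway VPC Link's security group:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-api
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    # sg-0aaaabbbbcccc1111 is created and maintained outside the controller;
    # its inbound rules reference the API Gateway VPC Link's security group.
    alb.ingress.kubernetes.io/security-groups: sg-0aaaabbbbcccc1111
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-api
                port:
                  number: 80
```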
/kind feature
Would using the security-groups annotation in combination with alb.ingress.kubernetes.io/manage-backend-security-group-rules: "true" solve the issue? This way the load balancer controller still manages the node/pod SGs, which I think is what you mean. This was the problem I was having, and I came across this post when searching for a solution. It looks like this option was added in v2.4.
No, it won't. When using both annotations, the load balancer controller creates a managed backend security group that is attached to the ALB; that security group is then referenced as an inbound rule source in the nodes' security group.
The security group that is configured and attached to the ALB via the alb.ingress.kubernetes.io/security-groups annotation still has to be created and configured manually to allow traffic from either a CIDR or another security group.
This feature request focuses on not having to manage any security groups yourself. A new annotation, e.g. alb.ingress.kubernetes.io/inbound-security-groups, would specify which incoming traffic sources (security groups) are allowed to reach the ALB. That would make it easier to integrate an ALB with internal AWS resources that already have a security group attached and send traffic to the ALB, for example an EC2 instance that wants to connect to an internal ALB.
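For reference, a minimal sketch of the v2.4+ combination discussed above, with placeholder names and SG ID: the frontend security group is still hand-managed, but the controller keeps the backend (node/pod) security group rules in sync.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-api
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    # Hand-managed frontend SG: it must itself allow the desired sources,
    # e.g. via an inbound rule referencing the API Gateway's security group.
    alb.ingress.kubernetes.io/security-groups: sg-0aaaabbbbcccc1111
    # v2.4+: let the controller manage inbound rules on the node/pod SGs
    # so backend traffic from the ALB is still permitted automatically.
    alb.ingress.kubernetes.io/manage-backend-security-group-rules: "true"
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-api
                port:
                  number: 80
```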
got it, nice suggestion
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.