Support for referencing existing security groups for inbound traffic

vincentheet opened this issue 3 years ago • 5 comments

Is your feature request related to a problem?
Currently it is possible to reference CIDRs using the alb.ingress.kubernetes.io/inbound-cidrs annotation. This is great for allowing traffic from the public IP space, but for internal ALBs it would be great to be able to allow inbound traffic from specific security groups, for example allowing traffic from the API Gateway (via VPC Link) to an ALB without opening up the ALB to the whole subnet or VPC. I would rather reference the security group of the VPC Link. One can also think of other use cases where a specific EC2 instance (not part of the EKS cluster) should be able to connect to an ALB and another EC2 instance should not.
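For illustration, this is roughly what the current CIDR-based restriction looks like on an Ingress (resource and service names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-app            # placeholder name
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    # today, inbound traffic to the ALB can only be restricted by CIDR:
    alb.ingress.kubernetes.io/inbound-cidrs: 10.0.0.0/16
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service   # placeholder backend service
                port:
                  number: 80
```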

Describe the solution you'd like
I would like an annotation, for example alb.ingress.kubernetes.io/inbound-security-groups, where I can specify a list of security groups that are allowed to send traffic to the listeners of the ALB.
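A sketch of what the proposed annotation could look like; note that this annotation does not exist in the controller, and the SG IDs are made up:

```yaml
metadata:
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    # hypothetical annotation, not implemented by the controller:
    # allow inbound traffic to the ALB listeners only from these SGs
    alb.ingress.kubernetes.io/inbound-security-groups: sg-0123456789abcdef0, sg-0fedcba9876543210
```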

Describe alternatives you've considered
Configuring a custom security group via the alb.ingress.kubernetes.io/security-groups annotation and referencing the API Gateway's SG as a source in the SG attached to the ALB. But this is extra work, since I then have to create and maintain a security group that would otherwise be managed by the load balancer controller.
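Roughly, that alternative looks like this (the SG ID is a placeholder; the SG itself has to be created out of band, with an inbound rule whose source is the VPC Link's SG):

```yaml
metadata:
  annotations:
    # frontend SG created and maintained manually, outside the controller;
    # its inbound rule references the API Gateway VPC Link's SG as source
    alb.ingress.kubernetes.io/security-groups: sg-0aaaabbbbcccc1111
```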

vincentheet · Jun 13 '22 15:06

/kind feature

kishorj · Jun 24 '22 18:06

Would using the security-groups annotation in combination with alb.ingress.kubernetes.io/manage-backend-security-group-rules: "true" solve the issue? This way the load balancer controller still manages the node/pod SGs, which I think is what you mean. This was the problem I was having, and I came across this post when searching for a solution. It looks like this option was added in 2.4.
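For concreteness, that combination would look roughly like this (assuming controller v2.4+; the SG ID is a placeholder):

```yaml
metadata:
  annotations:
    # user-supplied frontend SG attached to the ALB
    alb.ingress.kubernetes.io/security-groups: sg-0aaaabbbbcccc1111
    # keep letting the controller manage the node/pod SG rules (v2.4+)
    alb.ingress.kubernetes.io/manage-backend-security-group-rules: "true"
```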

runningman · Jul 07 '22 13:07

> Would using the security-groups annotation in combination with alb.ingress.kubernetes.io/manage-backend-security-group-rules: "true" solve the issue? This way the load balancer controller still manages the node/pod SGs, which I think is what you mean. This was the problem I was having, and I came across this post when searching for a solution. It looks like this option was added in 2.4.

No, it won't. When both annotations are used, the load balancer controller creates a managed backend security group which is attached to the ALB. That security group is referenced in the node security group as an inbound rule source. But the security group that is configured and attached to the ALB via the alb.ingress.kubernetes.io/security-groups annotation still has to be created and configured manually to allow traffic either from a CIDR or from another security group.

This feature request focuses on not having to manage any security groups yourself: a new annotation, e.g. alb.ingress.kubernetes.io/inbound-security-groups, would specify which incoming traffic sources (SGs) are allowed to reach the ALB. That would make it easier to integrate an ALB with internal AWS resources that already have a security group attached and need to send traffic to the ALB, for example an EC2 instance that wants to connect to an internal ALB.

vincentheet · Jul 13 '22 08:07

Got it, nice suggestion.

runningman · Jul 13 '22 08:07

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Oct 11 '22 09:10

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Nov 10 '22 09:11

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · Dec 10 '22 09:12

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
>   • After 90d of inactivity, lifecycle/stale is applied
>   • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
>   • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
>
> You can:
>
>   • Reopen this issue with /reopen
>   • Mark this issue as fresh with /remove-lifecycle rotten
>   • Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Dec 10 '22 09:12