Setting `aws-load-balancer-manage-backend-security-group-rules` to false is not working for Network Load Balancers
Describe the bug
When using the AWS Load Balancer Controller with an NLB, setting the annotation service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false" does not stop the controller from adding security group rules to the backend EC2 security group.
Here is the relevant part of the manifest for the service:
loadBalancerSourceRanges:
- "10.0.0.0/8"
annotations:
service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false"
service.beta.kubernetes.io/aws-load-balancer-type: external
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "ssl"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443" # Which port(s) to use for ssl. Otherwise it defaults to all
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true
service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-Ext-2018-06"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "80"
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
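For context, a minimal sketch of how these settings sit on a complete Service object; the name, namespace, selector, and ports below are illustrative and not taken from our actual manifest:
apiVersion: v1
kind: Service
metadata:
  name: example-nlb-service   # illustrative name
  namespace: example          # illustrative namespace
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false"
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "ssl"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
    service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-Ext-2018-06"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "80"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - "10.0.0.0/8"
  selector:
    app: example              # illustrative selector
  ports:
    - name: https
      port: 443
      targetPort: 443
    - name: http
      port: 80
      targetPort: 80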
Because of this issue (https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/3460#issuecomment-1811372583) we are manually creating the rules in the backend EC2 SG, and we want to stop the LBC from adding its own rules.
Even though we have set service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false", the LBC keeps adding rules to the backend EC2 SG.
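For reference, the rule we maintain by hand covers the same port range the controller manages; a hypothetical sketch of that rule expressed as a CloudFormation resource (we may manage it with different tooling in practice, and both group IDs are placeholders):
# Hypothetical sketch of the manually maintained backend rule (illustration only)
BackendIngressFromNLB:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: sg-0aaaaaaaaaaaaaaaa                 # placeholder: EC2 backend (node/ENI) SG
    SourceSecurityGroupId: sg-0bbbbbbbbbbbbbbbb   # placeholder: load balancer SG
    IpProtocol: tcp
    FromPort: 80
    ToPort: 443
    Description: manually managed backend rule (ports 80-443 from the LB SG)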
Logs from the LBC:
{
"level": "info",
"ts": "2023-11-17T02:58:09Z",
"msg": "authorizing securityGroup ingress",
"securityGroupID": "<ec2-backend-sg>",
"permission": [
{
"FromPort": 80,
"IpProtocol": "tcp",
"IpRanges": null,
"Ipv6Ranges": null,
"PrefixListIds": null,
"ToPort": 443,
"UserIdGroupPairs": [
{
"Description": "elbv2.k8s.aws/targetGroupBinding=shared",
"GroupId": "<lb-backend-sg>",
"GroupName": null,
"PeeringStatus": null,
"UserId": null,
"VpcId": null,
"VpcPeeringConnectionId": null
}
]
}
]
}
We also tried supplying a custom --backend-security-group, but the behavior is the same as with the auto-generated backend security group: the LBC keeps trying to add the rules to the EC2 backend SG even though we have set service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false".
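For clarity, a minimal sketch of how the custom SG was passed to the controller via that flag, as container args on the controller Deployment (the cluster name and SG ID are placeholders):
# excerpt from the aws-load-balancer-controller Deployment spec (sketch only)
containers:
  - name: aws-load-balancer-controller
    args:
      - --cluster-name=my-cluster                       # placeholder cluster name
      - --backend-security-group=sg-0123456789abcdef0   # pre-provisioned backend SG (placeholder ID)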
Steps to reproduce: Deploy a Service with the following annotations:
loadBalancerSourceRanges:
- "10.0.0.0/8"
annotations:
service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false"
service.beta.kubernetes.io/aws-load-balancer-type: external
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "ssl"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443" # Which port(s) to use for ssl. Otherwise it defaults to all
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true
service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-Ext-2018-06"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "80"
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
Expected outcome: The LBC should not add rules to the EC2 backend SG.
Environment
- AWS Load Balancer Controller version: 2.6.2
- Kubernetes version: 1.24
- Using EKS (yes/no), if so version? Yes, eks.11
Additional context: This was a workaround we were trying in order to get around https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/3460#issuecomment-1811372583
We provisioned our own backend SG and passed it with the --backend-security-group flag. But then we realized that even with the auto-generated backend SG, the annotation service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false" does not disable adding rules to the EC2/node SG.
I tried the different configuration options in the documentation, but there was no option that handles this. Any help on this matter would be highly appreciated. Thanks in advance.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
We are facing the same problem. We set the annotation aws-load-balancer-manage-backend-security-group-rules: "false" at the Kubernetes Service level and expected the controller not to manage the backend rules, but it does not work as expected: on creation and deletion of the Service, the backend SG rule is still created and removed.
Any solution to this yet?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
I hit the same issue. I need different ACLs for port 80 and port 443, which I cannot express in the Service spec, but my manually added rules keep getting reverted.
It seems the annotation only takes effect when custom security groups are specified, per the docs: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.8/guide/service/annotations/#security-groups
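Based on that documentation section, a sketch of the combination in which the annotation appears to be honored, i.e. when the frontend security groups are specified explicitly on the Service (the SG ID is a placeholder):
metadata:
  annotations:
    # per the linked docs: specify the frontend SG(s) yourself...
    service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-0123456789abcdef0   # placeholder frontend SG
    # ...and then ask the controller not to manage the backend SG rules
    service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false"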