ingress-nginx
Nginx ingress controller Max Security Group Rules EKS
What happened:
What you expected to happen:
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
Kubernetes version (use kubectl version): 1.22.9
Environment:
- Cloud provider or hardware configuration: EKS
- OS (e.g. from /etc/os-release): Amazon Linux 2
- How was the ingress-nginx-controller installed: ingress-nginx-4.0.19
How to reproduce this issue: helm uninstall ingress-nginx -n kube-system
ISSUE:
When installing the NGINX Ingress with an NLB, a Service gets created with ports 80 and 443. It also opens multiple security group rules on the worker node security group.
My current NGINX NLB is installed across three AZs (1a, 1b, 1c) and it creates the following SG rules:
PORT 80 --> AWS TARGETGROUP 32015 --> EKS NODE SG RULE 32015 10.10.0.0/23 AZ 1A Subnet
PORT 80 --> AWS TARGETGROUP 32015 --> EKS NODE SG RULE 32015 10.10.0.2/23 AZ 1B Subnet
PORT 80 --> AWS TARGETGROUP 32015 --> EKS NODE SG RULE 32015 10.10.0.4/23 AZ 1C Subnet
PORT 80 --> AWS TARGETGROUP 32015 --> EKS NODE SG RULE 32015 0.0.0.0/0
It creates three rules per port, one per subnet, and adds a 0.0.0.0/0 rule at the end. The problem is that this exceeds the rule limit on the worker node security group, so we cannot create more NGINX ingresses because of this limitation.
It would be nice if each NGINX ingress created a single rule on the EKS worker node security group, or alternatively kept only the 0.0.0.0/0 rule per port and removed the rest; that would also be helpful for now.
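For reference, here is a minimal sketch of the kind of Service the Helm chart renders when the NLB annotation is set. Names, labels, and extra annotations are illustrative (they depend on the chart version and release name), not copied from the cluster above.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # depends on the Helm release name
  namespace: kube-system
  annotations:
    # This annotation makes the in-tree AWS cloud provider create an NLB.
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
    - name: https
      port: 443
      targetPort: https
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
```

Because this is a LoadBalancer Service backed by NodePorts, the allocated NodePort (32015 in the example above) is what the cloud provider opens on the worker node security group, once per subnet plus the 0.0.0.0/0 entry.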
@vmpowercli: This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-kind bug
Hi @vmpowercli, do you know if this problem is also visible if you install using the deployment steps for EKS here: https://kubernetes.github.io/ingress-nginx/deploy/#aws
@longwuyuan Trying it now and will let you know how it goes
@longwuyuan It's creating a health-check rule ("kubernetes.io/rule/nlb/health") for every subnet for every node; this is the major reason the security group rules on the worker node SG get eaten up.
Is there any way we can specify not to use the EKS Worker node Security group?
Any help would be really appreciated
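Not from this thread, but one commonly suggested way to keep these rules off the worker node security group is to have the AWS Load Balancer Controller provision the NLB with IP targets, so traffic goes to pod IPs instead of NodePorts. Below is a hedged sketch of Helm values for the ingress-nginx chart, assuming the AWS Load Balancer Controller is already installed in the cluster; double-check the annotations against its documentation for your version.

```yaml
controller:
  service:
    type: LoadBalancer
    annotations:
      # Hand the Service to the AWS Load Balancer Controller instead of the
      # in-tree provider, and register pod IPs (not NodePorts) as targets.
      service.beta.kubernetes.io/aws-load-balancer-type: external
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
```

With IP targets the in-tree provider no longer creates the per-NodePort, per-subnet kubernetes.io/rule/nlb/* rules; the controller manages a much smaller set of rules for the traffic and health-check ports.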
Ok, will wait. Next, there are questions asked in the issue template and you have not answered them, so there is nothing to analyze if you only type a description.
I suggest you answer all the questions asked, and put screenshots and commands with outputs that show the problem.
Thanks, Long
We're experiencing the same issue. Are there any resolutions? Maybe an upgrade of the Helm chart will help?
Spent too much time searching for a fix but was not able to find a working method. Moved on from this controller and picked NGINX Ingress from nginxinc: https://github.com/nginxinc/kubernetes-ingress (https://artifacthub.io/packages/helm/nginx/nginx-ingress).
- Use Classic Load Balancer instead of NLB
- The Classic LB approach creates a single security group that covers all ports, requiring only one rule on the security group of the EKS cluster (see the values sketch below).
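A hedged sketch of what the Classic Load Balancer option can look like as Helm values for the ingress-nginx chart (values are illustrative, not taken from this thread): omitting the NLB annotation makes the in-tree AWS provider fall back to a Classic ELB.

```yaml
controller:
  service:
    type: LoadBalancer
    # No service.beta.kubernetes.io/aws-load-balancer-type: nlb annotation,
    # so the in-tree AWS provider falls back to a Classic Load Balancer,
    # which fronts the nodes with its own security group.
    annotations: {}
```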
Let me know if you run into any issues; happy to help.