
EKS Nginx Proxy Protocol Broken Header

Open dinukarajapaksha opened this issue 2 years ago • 36 comments

What happened:

The NGINX Ingress Controller gives a broken-header error on port 443 when Proxy Protocol v2 is enabled together with the AWS Load Balancer Controller. The services are accessible via ingress, but the controller emits the following error log. Proxy protocol is enabled in both the NGINX config and on the AWS NLB.

2023/02/17 09:23:36 [error] 384#384: *1781139 broken header: "" while reading PROXY protocol, client: <NLB-ZONE-1-IP>, server: 0.0.0.0:443
2023/02/17 09:23:36 [error] 384#384: *1781141 broken header: "" while reading PROXY protocol, client: <NLB-ZONE-2-IP>, server: 0.0.0.0:443
2023/02/17 09:23:36 [error] 384#384: *1781147 broken header: "" while reading PROXY protocol, client: <NLB-ZONE-3-IP>, server: 0.0.0.0:443
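For context on the error itself: when use-proxy-protocol is enabled, nginx requires every accepted connection to begin with the fixed 12-byte PROXY protocol v2 signature. The sketch below (illustrative Python, not nginx's actual implementation) shows why a connection that sends no preamble, such as a plain TCP health-check probe, fails that check with an empty header:

```python
# Sketch only: illustrates the PROXY v2 signature check that fails for
# connections (e.g. plain TCP health-check probes) that send no preamble.
PP2_SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"  # fixed 12-byte PROXY v2 signature

def starts_with_proxy_v2(first_bytes: bytes) -> bool:
    """True only if the connection preamble begins with the v2 signature."""
    return first_bytes[:12] == PP2_SIGNATURE

print(starts_with_proxy_v2(b""))                      # an empty health-check probe
print(starts_with_proxy_v2(b"\x16\x03\x01\x00\xc4"))  # a raw TLS ClientHello
print(starts_with_proxy_v2(PP2_SIGNATURE + b"\x21\x11\x00\x0c"))
```

Only the third connection would pass; the first two are what nginx reports as broken header: "".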

Originally we wanted to pass the client IP to NGINX by enabling Proxy Protocol v2. Since the annotation service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" was not working, we achieved that with service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true and turned proxy protocol off on both the NLB and the NGINX Ingress Controller. However, we want to confirm that enabling proxy protocol is possible, in case a future requirement needs additional headers to be passed.

So once you enable proxy protocol on both the AWS NLB and the NGINX Ingress side, the broken-header error log above starts to appear. The configuration we used is stated below.

What you expected to happen: The NGINX Ingress Controller should not emit any error output.

The errors start after enabling Proxy Protocol v2 via the NGINX Ingress Controller annotations.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.3.1
  Build:         92534fa2ae799b502882c8684db13a25cde68155
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.10

-------------------------------------------------------------------------------

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.0", GitCommit:"b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d", GitTreeState:"clean", BuildDate:"2022-12-08T19:51:43Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.14-eks-ffeb93d", GitCommit:"96e7d52c98a32f2b296ca7f19dc9346cf79915ba", GitTreeState:"clean", BuildDate:"2022-11-29T18:43:31Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: AWS EKS v1.23.7
  • Basic cluster related info:
  Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.0", GitCommit:"b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d", GitTreeState:"clean", BuildDate:"2022-12-08T19:51:43Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.14-eks-ffeb93d", GitCommit:"96e7d52c98a32f2b296ca7f19dc9346cf79915ba", GitTreeState:"clean", BuildDate:"2022-11-29T18:43:31Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}
ip-[IP-Removed].ec2.internal   Ready    <none>   9d    v1.23.7   [IP-Removed]   <none>        Ubuntu 20.04.5 LTS   5.15.0-1020-aws   containerd://1.5.9
ip-[IP-Removed].ec2.internal   Ready    <none>   9d    v1.23.7   [IP-Removed]   <none>        Ubuntu 20.04.5 LTS   5.15.0-1020-aws   containerd://1.5.9
ip-[IP-Removed].ec2.internal   Ready    <none>   9d    v1.23.7   [IP-Removed]   <none>        Ubuntu 20.04.5 LTS   5.15.0-1020-aws   containerd://1.5.9
  • How was the ingress-nginx-controller installed: Nginx Helm Chart version 4.2.5 https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx/4.2.5

  • Others: NGINX Ingress Helm chart configuration in the values file

controller:
  config:
    use-proxy-protocol: "true"
    use-forwarded-headers: "true"
    compute-full-forwarded-for: "false"
    enable-real-ip: "true"
  service:
    enableHttp: true
    enableHttps: true
    loadBalancerSourceRanges:
      - "10.0.0.0/8"
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: cluster-name=<cluster-name>
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <cert-arn>
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
      service.beta.kubernetes.io/aws-load-balancer-scheme: internal
      service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS-1-2-Ext-2018-06
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
      service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true
      service.beta.kubernetes.io/aws-load-balancer-type: external
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"

Anything else we need to know:

There is a similar issue on the Digital Ocean cloud provider under : https://github.com/kubernetes/ingress-nginx/issues/3996

Is there something I'm missing here? So far there is no service impact, but the NGINX Ingress Controller keeps emitting the broken-header error log. The client IPs shown here are the NLB IPs from the three availability zones. The AWS Load Balancer Controller has correctly provisioned the NLB according to the annotations.

2023/02/17 09:23:36 [error] 384#384: *1781139 broken header: "" while reading PROXY protocol, client: <NLB-ZONE-1-IP>, server: 0.0.0.0:443
2023/02/17 09:23:36 [error] 384#384: *1781141 broken header: "" while reading PROXY protocol, client: <NLB-ZONE-2-IP>, server: 0.0.0.0:443
2023/02/17 09:23:36 [error] 384#384: *1781147 broken header: "" while reading PROXY protocol, client: <NLB-ZONE-3-IP>, server: 0.0.0.0:443

dinukarajapaksha avatar Feb 17 '23 09:02 dinukarajapaksha

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Feb 17 '23 09:02 k8s-ci-robot

We've never been able to get this working either with the combination of use-proxy-protocol: "true" and service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" -- not entirely sure if it ever worked in the first place.

brsolomon-deloitte avatar Feb 17 '23 22:02 brsolomon-deloitte

For the uninitiated, it would help to have a clear, detailed explanation of what problem is solved by enabling Proxy Protocol v2.

@dinukarajapaksha Please click the new-issue button and look at the questions asked in the issue template. For readers' and contributors' benefit, edit your description and add the answers to those questions.

longwuyuan avatar Feb 18 '23 03:02 longwuyuan

For the uninitiated, it would help to have a clear, detailed explanation of what problem is solved by enabling Proxy Protocol v2.

@dinukarajapaksha Please click the new-issue button and look at the questions asked in the issue template. For readers' and contributors' benefit, edit your description and add the answers to those questions.

@longwuyuan I have edited the description accordingly, as follows:

Originally we wanted to pass the client IP to NGINX by enabling Proxy Protocol v2. Since the annotation service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" was not working, we achieved that with service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true and turned proxy protocol off on both the NLB and the NGINX Ingress Controller. However, we want to confirm that enabling proxy protocol is possible, in case a future requirement needs additional headers to be passed.

dinukarajapaksha avatar Feb 20 '23 09:02 dinukarajapaksha

@dinukarajapaksha At a very high level, setting the externalTrafficPolicy key to Local in the Service created by the ingress controller helps obtain the client IP address. There are nuances to this, but it would help if you commented on why you did not choose that, if you already know about it.

Passing headers is possible, to a large extent, as-is, without the controller stripping the headers. Other options for headers are available and documented at https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/ . So we need to know how the other options don't fit.

The reason for asking is to understand why only you are bringing up this topic and not many others are reporting the same problem.

longwuyuan avatar Feb 20 '23 10:02 longwuyuan

@dinukarajapaksha At a very high level, setting the externalTrafficPolicy key to Local in the Service created by the ingress controller helps obtain the client IP address. There are nuances to this, but it would help if you commented on why you did not choose that, if you already know about it.

Passing headers is possible, to a large extent, as-is, without the controller stripping the headers. Other options for headers are available and documented at https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/ . So we need to know how the other options don't fit.

The reason for asking is to understand why only you are bringing up this topic and not many others are reporting the same problem.

@longwuyuan Thanks for the information; I have gone through it. At this stage we want to make sure headers are passed from the NLB to the NGINX Ingress Controller. So I just want to clarify the point below.

If we have preserved the client IP via another method, such as service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true, is there any additional client information we could retrieve by enabling Proxy Protocol v2 (such as port information)?
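For reference, the PROXY protocol v2 binary header does encode the source port alongside the source IP: after the 12-byte signature, the IPv4/TCP address block carries source/destination addresses and ports. A hedged sketch of parsing that block (field layout per the proxy-protocol specification; the helper name is illustrative, not from nginx):

```python
import socket
import struct

PP2_SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"  # fixed 12-byte PROXY v2 signature

def parse_proxy_v2_ipv4(data: bytes):
    """Return (src_ip, src_port, dst_ip, dst_port) from a v2 IPv4/TCP header.

    A real parser must also validate the version/command byte and the
    address family; this sketch assumes a well-formed header.
    """
    if data[:12] != PP2_SIGNATURE:
        raise ValueError("not a PROXY v2 header")
    ver_cmd, fam_proto, length = struct.unpack("!BBH", data[12:16])
    if length < 12:
        raise ValueError("address block too short for IPv4/TCP")
    src_ip, dst_ip, src_port, dst_port = struct.unpack("!4s4sHH", data[16:28])
    return (socket.inet_ntoa(src_ip), src_port,
            socket.inet_ntoa(dst_ip), dst_port)

# Example header: client 10.1.2.3:40000 connecting to 10.9.9.9:443
hdr = (PP2_SIGNATURE
       + bytes([0x21, 0x11])           # version 2 + PROXY command, AF_INET/STREAM
       + struct.pack("!H", 12)         # 12-byte IPv4 address block follows
       + socket.inet_aton("10.1.2.3") + socket.inet_aton("10.9.9.9")
       + struct.pack("!HH", 40000, 443))
print(parse_proxy_v2_ipv4(hdr))  # → ('10.1.2.3', 40000, '10.9.9.9', 443)
```

So the client port is available to whatever terminates the PROXY header (here, nginx); whether it survives to the backend is a separate question, since the controller opens a new connection.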

dinukarajapaksha avatar Feb 20 '23 12:02 dinukarajapaksha

If you terminate on the controller, I don't think you can get the client port information. But I am not an expert, so I would have to tcpdump and maybe read up, because the controller establishes a new connection to the backend, and the connection from the real client is terminated at the controller.

(1) Can you elaborate on what problem you are trying to solve? (2) What do you want to achieve? (3) Have you dumped the raw packets on the backend pod (tcpdump) and searched them for the information you want?

At least no expected headers are dropped when using an out-of-the-box config of the controller (without enabling anything like proxy-protocol v2).

longwuyuan avatar Feb 20 '23 13:02 longwuyuan

If you terminate on the controller, I don't think you can get the client port information. But I am not an expert, so I would have to tcpdump and maybe read up, because the controller establishes a new connection to the backend, and the connection from the real client is terminated at the controller.

(1) Can you elaborate on what problem you are trying to solve? (2) What do you want to achieve? (3) Have you dumped the raw packets on the backend pod (tcpdump) and searched them for the information you want?

At least none of the headers are dropped in the out-of-the-box config of the controller (without enabling anything like proxy-protocol v2).

@longwuyuan

  1. We want to get the real client IP from requests in the NGINX ingress logs. By default it showed the AWS NLB IPs.

  2. We resolved this by using the AWS LB Controller with the annotation service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true. The NGINX Ingress Controller then showed the correct client IP for requests. But since there is an additional proxy-protocol option on both the NGINX Ingress and NLB sides, we thought the better approach was to enable it, to make sure we are not dropping any headers from the NLB to the Ingress Controller. That is when this issue came up. Please note that we have achieved our primary objective (getting the real client IP) via preserve_client_ip.enabled=true; we just want to check why proxy protocol is not working. Our best guess is that these broken-header requests are the NLB health checks on port 443.

  3. No.

dinukarajapaksha avatar Feb 20 '23 13:02 dinukarajapaksha

@dinukarajapaksha thanks for the clarification.

I want to apologize for not being clear, so I am asking a question even though you may have answered it above.

"Proxy Protocol v2 is not working": is this the question that remains to be answered? If yes, the documentation at https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#proxy-protocol hints that you would enable proxy protocol on the NLB (the project recommends an NLB), which is what you mentioned earlier.

So please point me to your post above, or copy/paste in a reply, the proxy-protocol config you enabled on the NLB, along with the ingress-nginx controller logs or other info showing that the controller does not work with proxy protocol or with Proxy Protocol v2.

Developer time is currently scarce, so even the smallest details will help triage this and set a priority.

longwuyuan avatar Feb 20 '23 14:02 longwuyuan

@dinukarajapaksha thanks for the clarification.

I want to apologize for not being clear, so I am asking a question even though you may have answered it above.

"Proxy Protocol v2 is not working": is this the question that remains to be answered? If yes, the documentation at https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#proxy-protocol hints that you would enable proxy protocol on the NLB (the project recommends an NLB), which is what you mentioned earlier.

So please point me to your post above, or copy/paste in a reply, the proxy-protocol config you enabled on the NLB, along with the ingress-nginx controller logs or other info showing that the controller does not work with proxy protocol or with Proxy Protocol v2.

Developer time is currently scarce, so even the smallest details will help triage this and set a priority.

@longwuyuan Please refer to the configuration I have shared already.

controller:
  config:
    use-proxy-protocol: "true"
    use-forwarded-headers: "true"
    compute-full-forwarded-for: "false"
    enable-real-ip: "true"
  service:
    enableHttp: true
    enableHttps: true
    loadBalancerSourceRanges:
      - "10.0.0.0/8"
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: cluster-name=<cluster-name>
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <cert-arn>
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
      service.beta.kubernetes.io/aws-load-balancer-scheme: internal
      service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS-1-2-Ext-2018-06
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
      service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true
      service.beta.kubernetes.io/aws-load-balancer-type: external
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"

dinukarajapaksha avatar Feb 21 '23 10:02 dinukarajapaksha

What we believe is happening is that the TCP health-check requests from the NLB to the NGINX Ingress Controller backend generate this error. For port 80 there is no error; only for 443.

6341670 broken header: "" while reading PROXY protocol, client: <NLB-IP>, server: 0.0.0.0:443

dinukarajapaksha avatar Feb 22 '23 09:02 dinukarajapaksha

We had to disable the port 443 health check by adding service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "80" to the NLB.
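As a sanity check that the controller side accepts proxy protocol at all (independent of the NLB), you can hand-send a PROXY v1 preamble, the human-readable text form of the protocol, before a plain HTTP request. A hypothetical probe sketch; the target address, port, and request path here are assumptions for illustration:

```python
import socket

def build_proxy_v1_line(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> bytes:
    """Build the human-readable PROXY protocol v1 preamble."""
    return f"PROXY TCP4 {src_ip} {dst_ip} {src_port} {dst_port}\r\n".encode()

if __name__ == "__main__":
    # With use-proxy-protocol: "true", nginx should log the spoofed client
    # IP (203.0.113.7) for this request instead of the probe machine's IP.
    preamble = build_proxy_v1_line("203.0.113.7", "10.0.0.1", 56324, 80)
    request = b"GET /healthz HTTP/1.1\r\nHost: example.test\r\nConnection: close\r\n\r\n"
    try:  # target address is an assumption; adjust to wherever the controller listens
        with socket.create_connection(("127.0.0.1", 80), timeout=5) as s:
            s.sendall(preamble + request)
            print(s.recv(4096).decode(errors="replace"))
    except OSError as exc:
        print("probe failed:", exc)
```

Sending the same request without the preamble should reproduce the broken-header log line, which is exactly what a plain TCP health-check probe does.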

dinukarajapaksha avatar Feb 23 '23 13:02 dinukarajapaksha

/remove-kind bug

longwuyuan avatar Feb 24 '23 04:02 longwuyuan

Is there a permanent solution for this? Right now we have temporarily disabled the health check for the TLS traffic port

dinukarajapaksha avatar Mar 04 '23 06:03 dinukarajapaksha

Facing the exact same problem as @dinukarajapaksha, and we needed the same workaround of disabling the health check for the TLS traffic port.

joelpramos avatar Mar 17 '23 10:03 joelpramos

@dinukarajapaksha Hi, I want to ask: how do you get the real client IP behind the NLB without enabling proxy_protocol? Could you share the details? Thank you.

reyyzzy avatar Mar 21 '23 04:03 reyyzzy

@dinukarajapaksha Hi, I want to ask: how do you get the real client IP behind the NLB without enabling proxy_protocol? Could you share the details? Thank you.

This should work, but check it against your other configs. This will get the client IP to NGINX, but you won't be able to use any additional features of the proxy protocol.

controller:
  service:
    enableHttp: true
    enableHttps: true
    loadBalancerSourceRanges:
      - "10.0.0.0/8"
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: cluster-name=<cluster-name>
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <cert-arn>
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
      service.beta.kubernetes.io/aws-load-balancer-scheme: internal
      service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS-1-2-Ext-2018-06
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
      service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true
      service.beta.kubernetes.io/aws-load-balancer-type: external

dinukarajapaksha avatar Mar 21 '23 13:03 dinukarajapaksha

@longwuyuan I see you have removed the kind/bug label. Since multiple people are having the same problem, is there a workaround for this?

dinukarajapaksha avatar Apr 18 '23 13:04 dinukarajapaksha

@dinukarajapaksha I read the details once again, and what I see is that a lot of the data on the live state of the objects in Kubernetes and in AWS is not available. You have shown the environment as:

  • ingress-nginx controller v1.3.1, but no kubectl describe ... output for the controller pod
  • some text about the annotations in use, but no clear kubectl describe ... output for the ingress object
  • a snippet of a values file, but no clear helm -n ingress-nginx get values $releasename showing the values live in use
  • not a single curl request that generates the error
  • no clear copy/paste of installing the controller with defaults on a minikube or kind cluster that shows the controller generating those error messages
  • no screenshots of the current live config of the AWS objects, such as the load balancer(s)

So without any real data clearly hinting at a bug, someone else has to guess your configuration and environment just to reproduce it. That is complicated, so let's wait for someone to reproduce the problem and post the data, or comment on what exactly is broken in the controller.

I am able to use proxy-protocol with externalTrafficPolicy: Local, so I am finding it hard to see what the bug is.

longwuyuan avatar Apr 18 '23 14:04 longwuyuan

For anyone stumbling upon this issue, we had the same problem and the workaround described by @dinukarajapaksha seemed to get traffic flowing successfully.

## configmap
- use-proxy-protocol: "*"
## LB annotations
- service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" 
+ service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true

danquack avatar May 05 '23 13:05 danquack

I am getting the same issue. The same config works with AWS EKS 1.21 and NGINX ingress Helm chart version 4.0.1, but not with AWS EKS 1.24 and NGINX ingress Helm chart version 4.5.2.

bhanugarg23 avatar May 10 '23 10:05 bhanugarg23

Hey @dinukarajapaksha. When I try to provision an NLB with service.beta.kubernetes.io/aws-load-balancer-type: external, the k8s Service never provisions a load balancer. Did you install the AWS Load Balancer Controller together with ingress-nginx, or does it work for you with ingress-nginx alone? My configuration, which doesn't provision an LB:

controller:
  replicaCount: 3
  service:
    enabled: true
    annotations:
      # Create external ELB
      service.beta.kubernetes.io/aws-load-balancer-type: external
      service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
      service.beta.kubernetes.io/aws-load-balancer-scheme: internal
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: cert
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      external-dns.alpha.kubernetes.io/hostname: dev.hacken.cloud.
    # https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx#aws-l7-elb-with-ssl-termination
    # https://github.com/kubernetes/ingress-nginx/issues/918#issuecomment-327849334
    targetPorts:
      http: http
      https: http

This one works (the k8s Service spins up an AWS NLB), but I still don't see the real user IP in the NGINX logs:

controller:
  replicaCount: 3
  service:
    enabled: true
    annotations:
      # Create external ELB
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: cert
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      external-dns.alpha.kubernetes.io/hostname: dev.hacken.cloud.
    # https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx#aws-l7-elb-with-ssl-termination
    # https://github.com/kubernetes/ingress-nginx/issues/918#issuecomment-327849334
    targetPorts:
      http: http
      https: http

insider89 avatar May 10 '23 13:05 insider89

service.beta.kubernetes.io/aws-load-balancer-type

@insider89 Check your AWS Load Balancer Controller version and the annotation compatibility, because some annotations are deprecated in the latest versions. Provisioning of the NLB is driven by the annotations on the NGINX controller's Service.

Here is the doc; check it against your AWS LB Controller version: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/nlb/

dinukarajapaksha avatar May 13 '23 15:05 dinukarajapaksha

@dinukarajapaksha I don't use the AWS Load Balancer Controller, only ingress-nginx. That was my question: do I need to install both of them?

insider89 avatar May 15 '23 07:05 insider89

@dinukarajapaksha I don't use the AWS Load Balancer Controller, only ingress-nginx. That was my question: do I need to install both of them?

Yes

dinukarajapaksha avatar May 15 '23 07:05 dinukarajapaksha

@dinukarajapaksha Could you please clarify one more thing: why do we need ingress-nginx if we use the AWS Load Balancer Controller? I don't understand. Do we install the AWS Load Balancer Controller so that ingress-nginx can use its annotations? So when we install the AWS Load Balancer Controller, we don't need it to provision any LB itself (like giving that controller empty annotations), and then in ingress-nginx we use the AWS Load Balancer Controller annotations? I still don't understand how they work together and which controller should provision the LB: the AWS Load Balancer Controller or ingress-nginx.

insider89 avatar May 15 '23 08:05 insider89

@dinukarajapaksha Could you please clarify one more thing: why do we need ingress-nginx if we use the AWS Load Balancer Controller? I don't understand. Do we install the AWS Load Balancer Controller so that ingress-nginx can use its annotations? So when we install the AWS Load Balancer Controller, we don't need it to provision any LB itself (like giving that controller empty annotations), and then in ingress-nginx we use the AWS Load Balancer Controller annotations? I still don't understand how they work together and which controller should provision the LB: the AWS Load Balancer Controller or ingress-nginx.

The AWS LB Controller is only there to provision the NLB. If you provision the nginx Service without the AWS LB Controller, an AWS Classic Load Balancer is provisioned instead. Your LB annotations on a Service (in this case, the NGINX one) won't work without the AWS LB Controller.

dinukarajapaksha avatar May 15 '23 14:05 dinukarajapaksha

@dinukarajapaksha I am using only the ingress-nginx controller (Helm chart 4.6.0) and am able to set up an NLB.

# Create external ELB
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: cert_arn
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"

But I am not able to enable Proxy Protocol v2 with ingress-nginx alone.

insider89 avatar May 15 '23 14:05 insider89

We had the same issue on EKS 1.24 with ingress-nginx deployed via the official Helm chart, with the details below:

NGINX Ingress controller
  Helm Chart:    ingress-nginx-4.7.1
  App Version:   1.8.1
  Tag:           https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.8.1
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.21.6

We solved the issue by deploying the Helm chart with these values:

config:
  entries:
    use-proxy-protocol: "true"

service:
  type: LoadBalancer
  externalTrafficPolicy: "Local"
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
    service.beta.kubernetes.io/aws-load-balancer-type: nlb

Most importantly, externalTrafficPolicy should be "Local" to preserve the source IP on providers that support it, and AWS does in this case. Reference

amedeopalopoli avatar Jul 06 '23 21:07 amedeopalopoli

@amedeopalopoli why would you need externalTrafficPolicy: Local when using proxy protocol? If I understand correctly, proxy protocol exists precisely to preserve the client IP without using the Local policy.

Please let me know if I misunderstood something.

MohammedNoureldin avatar Aug 12 '23 21:08 MohammedNoureldin