
Client IP cannot be accessed from pods running in EKS

Open abelanger5 opened this issue 4 years ago • 7 comments

Location

  • [ ] Browser
  • [ ] CLI
  • [X] API

Details

In EKS, the internal IP address of the EC2 instance is set as $remote_addr when requests are forwarded by NGINX. The NGINX ingress documentation (https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#source-ip-address) states:

If the ingress controller is running in AWS we need to use the VPC IPv4 CIDR.

The issue is that AWS replaces the X-Forwarded-For header with the internal EC2 instance's IP address. AWS can optionally prepend a PROXY protocol header to the TCP data containing the original source IP address, but NGINX does not parse this header by default, so the client IP never reaches upstream services.
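For context, PROXY protocol v1 is just a short ASCII line that the load balancer prepends to the TCP stream before any application data. A minimal illustration (the addresses and ports below are made-up example values, not from this cluster):

```shell
# Shape of a PROXY protocol v1 header on the wire (illustrative values):
# protocol, client IP, proxy IP, client port, destination port, CRLF.
# NGINX only consumes this line when use-proxy-protocol is enabled.
printf 'PROXY TCP4 203.0.113.7 10.0.5.109 56324 443\r\n'
```

Everything after that CRLF is the ordinary TCP payload (e.g. the TLS handshake), which is why both sides must agree on whether the header is present.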

Suggested Fix

If you modify the Helm values directly from the Porter dashboard for the nginx-ingress chart, the following will configure everything correctly on the EKS side:

controller:
  config:
    use-proxy-protocol: 'true'
  metrics:
    annotations:
      prometheus.io/port: '10254'
      prometheus.io/scrape: 'true'
    enabled: true
  podAnnotations:
    prometheus.io/port: '10254'
    prometheus.io/scrape: 'true'
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'

If you are using a custom domain, this should be enough to get the client IP headers. However, when forwarded from a *.porter.run domain, the source IP is replaced with a Porter IP address, so there are still changes required at the server level.
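As a quick sanity check once this is in place: the first field of NGINX's default access-log format is $remote_addr, so with proxy protocol working end to end it should show the real client IP rather than a 10.0.x.x node address. A sketch using a hypothetical log line:

```shell
# Hypothetical access-log line; with proxy protocol working, the first
# field ($remote_addr) is the real client IP instead of the EC2
# instance's internal 10.0.x.x address.
line='203.0.113.7 - - [05/May/2021:19:40:03 +0000] "GET / HTTP/1.1" 200 612'
echo "$line" | awk '{print $1}'   # prints the client IP: 203.0.113.7
```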

abelanger5 avatar May 05 '21 12:05 abelanger5

Trying out these changes resulted in lots of errors like:

2021/05/05 15:15:42 [error] 2987#2987: *12081097 broken header: "" while reading PROXY protocol, client: 10.0.5.109, server: 0.0.0.0:443

For reference, the original Helm values that shipped with the nginx-ingress chart were:

controller:
  metrics:
    annotations:
      prometheus.io/port: '10254'
      prometheus.io/scrape: 'true'
    enabled: true
  podAnnotations:
    prometheus.io/port: '10254'
    prometheus.io/scrape: 'true'
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip

evantahler avatar May 05 '21 15:05 evantahler

@evantahler my apologies, my setup uses a Classic Load Balancer instead of an NLB, so the behavior may be different. I will have to test with the NLB and a custom domain. Did you try merging the nlb-ip annotation with the values that fixed it for me? For you, this would be:

controller:
  config:
    use-proxy-protocol: 'true'
  metrics:
    annotations:
      prometheus.io/port: '10254'
      prometheus.io/scrape: 'true'
    enabled: true
  podAnnotations:
    prometheus.io/port: '10254'
    prometheus.io/scrape: 'true'
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
      service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip

abelanger5 avatar May 05 '21 16:05 abelanger5

@abelanger5 Thanks for the suggested fix - I'm using a custom domain, so hopefully it's going to work for my deployment. However, I'm not sure where/how it should be fixed in Porter dashboard. Could you please elaborate on that?

FeodorFitsner avatar May 05 '21 16:05 FeodorFitsner

@FeodorFitsner -- yep, go to the "Applications" tab and select "All" as the filter. Click on the chart called nginx-ingress, click the "DevOps Mode" button, then open the "Helm Values" tab. You can then copy/paste the yaml from above (https://github.com/porter-dev/porter/issues/632#issuecomment-832812552) and click "Update Values". After a few seconds it should have reloaded, and you can check whether the fix worked.

I still need to test this out with an NLB and custom domain, so if this fix doesn't work you can expand the "Revision" section and click "Revert" next to the first revision, which should revert to the default settings.

abelanger5 avatar May 05 '21 16:05 abelanger5

Replying to https://github.com/porter-dev/porter/issues/632#issuecomment-832812552

That configuration didn't work, and produced some really excellent proxy errors:

2021/05/05 19:07:43 [error] 3141#3141: *12252083 broken header: "" while reading PROXY protocol, client: 10.0.5.109, server: 0.0.0.0:443
2021/05/05 19:07:43 [error] 3141#3141: *12252086 broken header: "����{Ip��@`���T٧�?�B�@����^�? �����Y+]L��;�L�$^4rⓕ*�Of�-hW ���+�/�,�0̨̩����/" while reading PROXY protocol, client: 10.0.5.109, server: 0.0.0.0:443
2021/05/05 19:07:43 [error] 3140#3140: *12252087 broken header: "" while reading PROXY protocol, client: 10.0.5.109, server: 0.0.0.0:443
2021/05/05 19:07:44 [error] 3141#3141: *12252088 broken header: "���w}q��;l~��VF;�`f��yr��X ;��d8zBb���B��:��2.�VF��BPy"���+�/�,�0̨̩����/" while reading PROXY protocol, client: 10.0.5.109, server: 0.0.0.0:443
2021/05/05 19:07:44 [error] 3140#3140: *12252097 broken header: "" while reading PROXY protocol, client: 10.0.5.109, server: 0.0.0.0:443
2021/05/05 19:07:45 [error] 3141#3141: *12252098 broken header: "�����m���xɝ
���]��=�>�D��) ��g"Ê��υ(ҕ|5��E���'���f3q ���+�/�,�0̨̩����/" while reading PROXY protocol, client: 10.0.5.109, server: 0.0.0.0:443
2021/05/05 19:07:45 [error] 3140#3140: *12252099 broken header: "" while reading PROXY protocol, client: 10.0.5.109, server: 0.0.0.0:443
2021/05/05 19:07:45 [error] 3141#3141: *12252100 broken header: "�C�:�D���Y���[�����;Y�ږ���M1 �t)'l"c�
H��]��d�[���zK��"jj�+�/�,�0̨̩����/" while reading PROXY protocol, client: 10.0.5.109, server: 0.0.0.0:443
2021/05/05 19:07:45 [error] 3140#3140: *12252110 broken header: "�g��23���ZMu�+��B�4+|�V
�':NP�� ���+�/�,�0̨̩����/" while reading PROXY protocol, client: 10.0.5.109, server: 0.0.0.0:443
2021/05/05 19:07:45 [error] 3140#3140: *12252112 broken header: "" while reading PROXY protocol, client: 10.0.5.109, server: 0.0.0.0:443
2021/05/05 19:07:46 [error] 3141#3141: *12252116 broken header: "" while reading PROXY protocol, client: 10.0.5.109, server: 0.0.0.0:443

evantahler avatar May 05 '21 19:05 evantahler
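A note on those "broken header" lines: a PROXY-protocol-enabled listener expects the literal ASCII bytes "PROXY ..." at the very start of each connection. If the load balancer is not actually sending the header, the first thing NGINX sees is the raw TLS ClientHello, which begins with a handshake record header (bytes 0x16 0x03 ...), and those unprintable bytes are what show up quoted in the error messages. A small illustration of the two first-byte sequences:

```shell
# Expected prefix when proxy protocol is in use: ASCII "PROXY".
printf 'PROXY' | od -An -tx1 | tr -d ' \n'; echo        # 50524f5859
# What a TLS connection actually starts with: handshake record header
# (content type 0x16, version bytes 0x03 0x01), written here in octal.
printf '\026\003\001' | od -An -tx1 | tr -d ' \n'; echo # 160301
```

So the error indicates a mismatch: one side is configured for proxy protocol and the other side is not.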

OK, I've tried the fix (https://github.com/porter-dev/porter/issues/632#issuecomment-832812552) with a custom domain and initially got a lot of these errors on the Status tab:

2021/05/05 19:40:03 [error] 930#930: *890201 broken header: "" while reading PROXY protocol, client: 10.0.4.29, server: 0.0.0.0:443
2021/05/05 19:40:05 [error] 929#929: *890229 broken header: "" while reading PROXY protocol, client: 10.0.4.29, server: 0.0.0.0:443
2021/05/05 19:40:06 [error] 929#929: *890230 broken header: "" while reading PROXY protocol, client: 10.0.4.29, server: 0.0.0.0:443
2021/05/05 19:40:06 [error] 930#930: *890244 broken header: "" while reading PROXY protocol, client: 10.0.4.29, server: 0.0.0.0:443
...

But after refreshing the app in the browser, those errors were gone and, success, I can see my real external IP in the application logs. So, yes, the proposed solution works for a custom domain, thank you! :)

FeodorFitsner avatar May 05 '21 19:05 FeodorFitsner

I tried again and waited longer :D

After about 2 minutes, I stopped seeing the scary messages I linked in https://github.com/porter-dev/porter/issues/632#issuecomment-832939982, and I can confirm that things work now, with properly proxied IP addresses per the configuration in https://github.com/porter-dev/porter/issues/632#issuecomment-832812552.

evantahler avatar May 06 '21 01:05 evantahler