X-Forwarded headers not handled with Nginx + FPM
Describe your Issue
I run Nextcloud behind an OpenStack Octavia load balancer (with HTTPS termination) that sends the X-Forwarded-For, X-Forwarded-Port, and X-Forwarded-Proto headers to the Nginx ingress controller.
The LB pool is configured to use the Proxy Protocol in order to preserve the client source IP.
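For reference, the ingress-nginx side has to be told to accept Proxy Protocol as well; a minimal sketch, assuming the official ingress-nginx chart layout (these keys belong to that chart, not to the Nextcloud one):

controller:
  config:
    # accept Proxy Protocol from the upstream load balancer
    use-proxy-protocol: "true"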
Out of the box, the chart does not let me pass these headers from Nginx to PHP-FPM, so I had to create a custom configuration.
Also, the documentation is wrong about the trusted_proxies Nextcloud parameter. In our case it should be ['127.0.0.1'], since the Nextcloud instance and the Nginx webserver run in the same pod, and the traffic is forwarded to the PHP upstream on 127.0.0.1 as well.
Logs and Errors
Under Nextcloud Administration Area > Admin Settings > Security, the recognized IP is something like ::ffff:10.42.X.X, which is the address of the rke2-nginx-ingress-controller, even though the Helm configuration is supposed to be correct (I think).
In the Nginx container logs, the $http_x_forwarded_for value is correct, but it is not transmitted to PHP.
Describe your Environment
- Kubernetes distribution: RKE2 v1.31
- Argo CD version: v2.13.1+af54ef8
- Helm version: v3.15.4+gfa9efb0
- Helm chart version: 6.5.1
- values.yaml (content not relevant to this issue removed):
ingress:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
nextcloud:
  configs:
    proxy.config.php: |-
      <?php
      $CONFIG = array (
        'trusted_proxies' => array('127.0.0.1'),
        'forwarded_for_headers' => array('HTTP_X_FORWARDED_FOR')
      );
nginx:
  ipFamilies:
    - IPv4
    - IPv6
  config:
    default: false
    custom: |-
      map $http_x_forwarded_proto $fastcgi_https {
        default "";
        https "on";
      }
      ...
      server {
        set_real_ip_from 10.42.0.0/16; # My pod IPv4 subnet
        set_real_ip_from fd00:42::/56; # My pod IPv6 subnet
        real_ip_header X-Forwarded-For;
        real_ip_recursive on;
        location ~ \.php(?:$|/) {
          fastcgi_param HTTPS $fastcgi_https;
          fastcgi_param SERVER_PORT $http_x_forwarded_port;
          ...
        }
        ...
      }
Additional context, if any
With this solution, all the headers I mentioned are passed through correctly. Working fine :) It would be great to implement these fixes in the Helm chart behind conditions.
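As a sketch of what such a chart option could look like (these keys are hypothetical, not existing chart values):

nginx:
  config:
    # hypothetical values a PR could introduce:
    setRealIpFrom: "10.42.0.0/16"   # rendered into set_real_ip_from (trusted proxy CIDR)
    realIpHeader: "X-Forwarded-For" # rendered into real_ip_header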
Don't turn CORS on for this.
In addition, you need to ensure ingress-nginx is deployed with the following settings:
config:
  use-forwarded-headers: "true"
  compute-full-forwarded-for: "true"
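For context, in the official ingress-nginx chart these keys live under controller.config, which is rendered into the controller's ConfigMap; a minimal sketch of what they do:

controller:
  config:
    # trust and forward incoming X-Forwarded-* headers instead of overwriting them
    use-forwarded-headers: "true"
    # append the controller's peer address to the X-Forwarded-For chain
    # instead of replacing the header with just that address
    compute-full-forwarded-for: "true"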
Ah no - this is a bit nuanced, because of FastCGI.
Looks like we need something like this:
fastcgi_param REMOTE_ADDR $http_x_forwarded_for;
However, there are caveats when you have a full forwarded-for chain, as I do in my config: while I have other apps in my cluster that need the full chain, FastCGI cannot handle a chain and needs a single IP. The comments on this SO post have context: https://stackoverflow.com/a/36451158/2262288. I am trying to see if the module mentioned there is available in the NGINX container being used for this chart.
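The realip approach sidesteps the chain problem: nginx walks the X-Forwarded-For chain from the right, skips every address covered by set_real_ip_from, and rewrites $remote_addr to the first untrusted entry, so FastCGI receives a single IP through the usual REMOTE_ADDR param. A minimal sketch, assuming an illustrative 10.42.0.0/16 pod subnet:

set_real_ip_from 10.42.0.0/16;  # trust the ingress/pod subnet (illustrative CIDR)
real_ip_header X-Forwarded-For; # take the client address from this header
real_ip_recursive on;           # skip all trusted hops in the chain, not just the last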
You do get close with this, but I'm not sure the module you are trying to use is in the container.
EDIT: It appears my suspicion is correct; the module is not installed/enabled in the container:
/etc/nginx/modules # ls
ngx_http_geoip_module-debug.so ngx_http_xslt_filter_module-debug.so
ngx_http_geoip_module.so ngx_http_xslt_filter_module.so
ngx_http_image_filter_module-debug.so ngx_stream_geoip_module-debug.so
ngx_http_image_filter_module.so ngx_stream_geoip_module.so
ngx_http_js_module-debug.so ngx_stream_js_module-debug.so
ngx_http_js_module.so ngx_stream_js_module.so
It also looks like you forgot to set the following to disable the default config:
config:
  default: false
Digging into options here.
Nope, I was wrong; it actually is built into this image:
/etc/nginx/modules # nginx -V
nginx version: nginx/1.27.3
built by gcc 13.2.1 20240309 (Alpine 13.2.1_git20240309)
built with OpenSSL 3.3.0 9 Apr 2024 (running with OpenSSL 3.3.2 3 Sep 2024)
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-perl_modules_path=/usr/lib/perl5/vendor_perl --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-http_v3_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-Os -fstack-clash-protection -Wformat -Werror=format-security -fno-plt -g' --with-ld-opt='-Wl,--as-needed,-O1,--sort-common -Wl,-z,pack-relative-relocs'
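(Note --with-http_realip_module in the configure arguments above, which confirms the realip module is compiled in.)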
OK, this is working for me on my external-facing ingress controller, but not on my internal controller. The external controller receives traffic port-forwarded from the router and sees the true client IP, while the internal one sees the ingress-nginx controller IP. I am combing through the configs, because this is either a "spot the difference" issue or something different about port-forwarded traffic vs. local traffic:
nginx:
  enabled: true
  config:
    default: false
    custom: |-
      upstream php-handler {
        server 127.0.0.1:9000;
      }

      map $http_x_forwarded_proto $fastcgi_https {
        default "";
        https "on";
      }

      server {
        # Default, IPv4 only
        listen 80;

        # HSTS settings
        # WARNING: Only add the preload option once you read about
        # the consequences in https://hstspreload.org/. This option
        # will add the domain to a hardcoded list that is shipped
        # in all major browsers and getting removed from this list
        # could take several months.
        add_header Referrer-Policy "no-referrer" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-Download-Options "noopen" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Permitted-Cross-Domain-Policies "none" always;
        add_header X-Robots-Tag "noindex, nofollow" always;
        add_header X-XSS-Protection "1; mode=block" always;

        # set max upload size
        client_max_body_size 10G;
        fastcgi_buffers 64 4K;

        # Enable gzip but do not remove ETag headers
        gzip on;
        gzip_vary on;
        gzip_comp_level 4;
        gzip_min_length 256;
        gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
        gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

        # Pagespeed is not supported by Nextcloud, so if your server is built
        # with the `ngx_pagespeed` module, uncomment this line to disable it.
        #pagespeed off;

        # Remove X-Powered-By, which is an information leak
        fastcgi_hide_header X-Powered-By;

        # Add .mjs as a file extension for javascript
        # Either include it in the default mime.types list
        # or you can include that list explicitly and add the file extension
        # only for Nextcloud like below:
        include mime.types;
        types {
          text/javascript js mjs;
        }

        set_real_ip_from 10.2.0.0/16; # My pod IPv4 subnet
        real_ip_header X-Forwarded-For;
        real_ip_recursive on;

        # Path to the root of your installation
        root /var/www/html;

        # Specify how to handle directories -- specifying `/index.php$request_uri`
        # here as the fallback means that Nginx always exhibits the desired behaviour
        # when a client requests a path that corresponds to a directory that exists
        # on the server. In particular, if that directory contains an index.php file,
        # that file is correctly served; if it doesn't, then the request is passed to
        # the front-end controller. This consistent behaviour means that we don't need
        # to specify custom rules for certain paths (e.g. images and other assets,
        # `/updater`, `/ocm-provider`, `/ocs-provider`), and thus
        # `try_files $uri $uri/ /index.php$request_uri`
        # always provides the desired behaviour.
        index index.php index.html /index.php$request_uri;

        # Rule borrowed from `.htaccess` to handle Microsoft DAV clients
        location = / {
          if ( $http_user_agent ~ ^DavClnt ) {
            return 302 /remote.php/webdav/$is_args$args;
          }
        }

        location = /robots.txt {
          allow all;
          log_not_found off;
          access_log off;
        }

        # Make a regex exception for `/.well-known` so that clients can still
        # access it despite the existence of the regex rule
        # `location ~ /(\.|autotest|...)` which would otherwise handle requests
        # for `/.well-known`.
        location ^~ /.well-known {
          # The following rules are borrowed from `.htaccess`
          location = /.well-known/carddav { return 301 /remote.php/dav/; }
          location = /.well-known/caldav { return 301 /remote.php/dav/; }
          # Anything else is dynamically handled by Nextcloud
          location ^~ /.well-known { return 301 /index.php$uri; }
          try_files $uri $uri/ =404;
        }

        # Rules borrowed from `.htaccess` to hide certain paths from clients
        location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/) { return 404; }
        location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) { return 404; }

        # Ensure this block, which passes PHP files to the PHP process, is above the blocks
        # which handle static assets (as seen below). If this block is not declared first,
        # then Nginx will encounter an infinite rewriting loop when it prepends `/index.php`
        # to the URI, resulting in a HTTP 500 error response.
        location ~ \.php(?:$|/) {
          # Required for legacy support
          rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+|.+\/richdocumentscode(_arm64)?\/proxy) /index.php$request_uri;
          fastcgi_split_path_info ^(.+?\.php)(/.*)$;
          set $path_info $fastcgi_path_info;
          try_files $fastcgi_script_name =404;
          include fastcgi_params;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_param PATH_INFO $path_info;
          #fastcgi_param HTTPS on;
          fastcgi_param modHeadersAvailable true; # Avoid sending the security headers twice
          fastcgi_param front_controller_active true; # Enable pretty urls
          fastcgi_pass php-handler;
          fastcgi_intercept_errors on;
          fastcgi_request_buffering off;
          fastcgi_param HTTPS $fastcgi_https;
          fastcgi_param SERVER_PORT $http_x_forwarded_port;
        }

        location ~ \.(?:css|js|svg|gif)$ {
          try_files $uri /index.php$request_uri;
          expires 6M; # Cache-Control policy borrowed from `.htaccess`
          access_log off; # Optional: Don't log access to assets
        }

        location ~ \.woff2?$ {
          try_files $uri /index.php$request_uri;
          expires 7d; # Cache-Control policy borrowed from `.htaccess`
          access_log off; # Optional: Don't log access to assets
        }

        location / {
          try_files $uri $uri/ /index.php$request_uri;
        }
      }
OK, I've crawled through the load balancers from MetalLB, through the ingress-nginx and external-ingress-nginx setups, down to the two Ingress objects: there are zero in-cluster differences. So something sent by the router during port-forwarding is making the above work.
OK, the internal ingress-nginx needed externalTrafficPolicy: Local to make this work, while the external one did not seem to. It appears the external-ingress-nginx was on the same worker as the metallb-controller, so I think that was a fluke. I set both to Local and it works consistently across both.
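For reference, a minimal sketch of that setting in the ingress-nginx Helm values (assuming the official chart layout):

controller:
  service:
    # Local keeps the client source address instead of SNATing it at the node
    externalTrafficPolicy: Local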
Hello,
fastcgi cannot handle a chain and needs to get one ip only
Exactly, fastcgi_param REMOTE_ADDR $http_x_forwarded_for; is not a good solution, as the header can contain multiple IP addresses.
The best practice, as described by the official Nginx documentation, is to use ngx_http_realip_module. This is also how the ingress controller provides the client's source IP.
I investigated this subject a lot before opening this issue and didn't find a better, cleaner way; this approach seems clean to me.
In addition you need to ensure ingress-nginx is deployed with the following settings: compute-full-forwarded-for: "true"
I didn't set compute-full-forwarded-for to true, since I have a lot of different apps in my cluster and I don't know how they would handle it. Should I try? I don't see how it can "improve" my setup.
config:
  default: false
You're right, this block is in my config, but I forgot to include it.
Ok, the internal ingress-nginx needed externalTrafficPolicy: Local to make this work
I also forgot to mention that I use a ClusterIP service, since my load balancer is pre-provisioned outside of Kubernetes (Kubernetes doesn't manage it). So I didn't face the externalTrafficPolicy problem, but you're right that Local is the correct setting (I had the same question while working on a Mailu issue in another context). In 99% of cases it is what the official Kubernetes docs recommend to preserve the client's source IP.
Merry Christmas and happy new year 🎄
I just added default: false to my issue to match my settings.
I didn't set compute-full-forwarded-for to true, since I have a lot of different apps in my cluster and I don't know how they would handle it. Should I try? I don't see how it can "improve" my setup.
This ensures that the upstream IPs are preserved through the setup. I've never had any apps unable to handle this, and if you do not have it, ingress-nginx will only keep the IP that forwarded the traffic to it, losing any client context in the forwarding chain.
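For example (addresses illustrative), a request traversing client -> LB -> ingress arrives upstream with:

X-Forwarded-For: 203.0.113.7, 10.42.3.1

whereas without compute-full-forwarded-for the header carries only the immediate peer's address, and the original client is lost.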
Try it and see if it resolves the problem for you.
Ideally, once we both have it solved, a PR should be opened so these custom configs are not needed.
Ah okay, after checking: the FPM container got the correct IP (it's visible in data/nextcloud.log), but the nginx container logs show internal IPs.
So we should update https://github.com/nextcloud/helm/blob/main/charts/nextcloud/files/nginx.config.tpl to add:
real_ip_header X-Forwarded-For;
real_ip_recursive on;
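Note that these two directives alone have no effect until at least one set_real_ip_from entry marks the upstream proxy as trusted; a minimal sketch, with an illustrative pod CIDR that would presumably need to become a chart value:

set_real_ip_from 10.42.0.0/16;  # illustrative; the trusted CIDR should be configurable
real_ip_header X-Forwarded-For;
real_ip_recursive on;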
Nice finding (and debugging)!
Hmm, for me it looks like I got the correct logs without extra configuration:
10.42.0.1 - - [12/Mar/2025:19:46:57 +0000] "GET / HTTP/1.1" 302 5 "-" "curl/8.12.0" "<my-public-ip-from-x-forwarded>"
This is highly dependent on your upstream proxy chaining. If the load balancer is on the same node as Nextcloud, you'll never see this issue.
If it is on another node, you'll see it; to resolve it, this config change needs to be added AND you'll need to make sure the ingress pods are also receiving the client IP headers.
With Cilium, I had to move to Geneve tunneling mode with DSR active, in addition to adding this config, to avoid needing local-only routing as a workaround.
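A sketch of the Cilium Helm values for that combination, assuming a recent Cilium release (key names vary across versions, so treat this as illustrative):

routingMode: tunnel
tunnelProtocol: geneve
loadBalancer:
  mode: dsr
  dsrDispatch: geneve   # DSR over Geneve; typically requires kube-proxy replacement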
Hi everyone, I am running into the same problem. If I may ask, is the change @wrenix suggested still planned?
Sorry, no. I could not reproduce it, and to me it looks like an ingress-controller setup problem, so I will not work on this anymore.
FWIW, it looks like X-Forwarded-For is passed upstream just fine with the default nginx config, but some checks, e.g. allowed_admin_ranges, still seem to check against remote_addr and not XFF.
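For context, a minimal config.php sketch of that option as I understand it (the range is illustrative):

'allowed_admin_ranges' => ['192.0.2.0/24'],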
I do not know the allowed_admin_ranges option. Is it part of Nextcloud? Should we open a bug in the server repo? (Or is it an option in FPM or nginx?)