[BUG] Nginx spams logs with `upstream server temporarily disabled` and `connection refused`
Description
Since 1.2.7, nginx spams the logs with upstream server temporarily disabled and connection refused messages on all of my subdomains. Everything is working fine. I think it's something about the IPv6 listen port (all the logs are about http://[::1]:XX). I only use IPv4 on my server. Is there something we can do about this?
How to reproduce
Just add a subdomain and disable IPv6 on the server.
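For illustration, a minimal sketch of the kind of entry that triggers it for me (domain and port are placeholders, same pattern as the compose entry shown further down; IPv6 is disabled system-wide on the host):

- gitlab.mydomain_REVERSE_PROXY_HOST=http://localhost:XXXX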
Logs
2021/07/15 15:12:54 [error] 574#574: *202525 connect() failed (111: Connection refused) while connecting to upstream, client: X.X.X.X, server: gitlab.mydomain, request: "POST /api/v4/jobs/request HTTP/1.1", upstream: "http://[::1]:XXX/api/v4/jobs/request", host: "gitlab.mydomain"
2021/07/15 15:12:54 [warn] 574#574: *202525 upstream server temporarily disabled while connecting to upstream, client: X.X.X.X, server: gitlab.berlioz.me, request: "POST /api/v4/jobs/request HTTP/1.1", upstream: "http://[::1]:XXX/api/v4/jobs/request", host: "gitlab.mydomain"
Hey @thelittlefireman,
Very strange one because ::1 is the IPv6 loopback... I assume that you're using reverse proxy mode? What kind of data do you have in REVERSE_PROXY_HOST_*: an IP, an existing domain, a Docker service name, ...?
That's right, I use reverse proxy mode and all of my sites are defined like this:
- gitlab.mydomain_REVERSE_PROXY_HOST=http://localhost:XXXX
Maybe this could help:
https://serverfault.com/questions/527317/disable-ipv6-in-nginx-proxy-pass
and add something like this?
location / {
    resolver 1.1.1.1 ipv6=off;
    proxy_pass https://example.com;
}
This is strange because the resolver directive with the ipv6=off flag is already defined at the http context (see here).
Since you are setting http://localhost:XXXX, can you confirm that you're using the host networking mode?
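For reference, a rough sketch of what that http-level directive looks like (the resolver address here is only a placeholder, not necessarily what bunkerized-nginx actually generates):

http {
    # ipv6=off makes nginx ignore AAAA records when it resolves
    # upstream hostnames through this resolver
    resolver 127.0.0.11 ipv6=off;
}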
Arf, you're right about the resolver. My reverse proxy config hasn't changed since the first version of bunkerized-nginx ^^. Yes, I'm using the host networking mode for this container.
Until we find a solution, maybe you can set http://127.0.0.1:XXXX to force IPv4 as a workaround?
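For example, mirroring the compose entry above (the port is still a placeholder):

- gitlab.mydomain_REVERSE_PROXY_HOST=http://127.0.0.1:XXXX

If I understand nginx correctly, a literal hostname in proxy_pass is resolved once at startup through the system resolver (where localhost usually maps to both 127.0.0.1 and ::1), not through the resolver directive, which would explain why ipv6=off has no effect here.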
Thanks :) @fl0ppy-d1sk Changing all localhost to 127.0.0.1 seems to work... strange bug BTW ^-^
Basically the "upstream" directive would be the right place, like

upstream mybackend {
    server 172.18.0.5:8080 max_fails=0 fail_timeout=0;
}

to not let nginx drown your backend on errors, since nginx's default behaviour will drop your upstream for a few seconds when your backend throws e.g. a 502... so you can at least avoid the upstream server temporarily disabled messages.
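A minimal sketch of how that upstream block could be wired in, assuming a plain server block (names, addresses and ports are placeholders, not the config bunkerized-nginx generates):

upstream mybackend {
    # max_fails=0 disables the passive health check, so nginx never marks
    # the backend as temporarily unavailable after errors
    server 172.18.0.5:8080 max_fails=0 fail_timeout=0;
}

server {
    listen 80;
    server_name gitlab.mydomain;

    location / {
        proxy_pass http://mybackend;
    }
}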