PHP-FPM stderr logs cause nginx to return 502 errors to client
I have php-fpm running on docker with the following php.ini:
```ini
memory_limit = 512M
error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT
display_errors = Off
display_startup_errors = Off
log_errors = On
ignore_repeated_errors = On
html_errors = Off
upload_max_filesize = 5M
post_max_size = 50M
max_input_vars = 5000
max_execution_time = 60
error_log = /var/log/php/error.log
log_level = error
catch_workers_output = yes
```
Everything worked and nginx was proxying perfectly. The problem is that the logs are never shown in the container's stdout, so I changed it to:

```ini
error_log = /proc/self/fd/2
```

which meant that I could see the logs with `docker logs -f`, but I started seeing these errors on nginx:
```
2019/08/26 17:29:35 [error] 22#0: *6 FastCGI sent in stderr: "PHP message: PHP Deprecated: Function create_function() is deprecated in
```
So some plugin in my WordPress (and I know which one it is) is using a deprecated function, but this error is non-critical; it's actually a notice. The problem is that with this setup, I keep getting 502 Bad Gateway errors when a client requests a page. I can also see that php-fpm is returning them via the FastCGI protocol when I connect to it directly.

It all starts working when I set `log_errors = Off`. So I read a bit and realized the FastCGI protocol can return a FCGI_STDERR packet to its client, which includes error logs, so I decided not to go down that rabbit hole and tried to cheat by doing this:
```shell
ln -sf /dev/stderr /var/log/php/error.log
```

to make php-fpm think it was logging to a file and still have the logs show up in `docker logs`. No such luck: the 502s only disappear if I log to `/var/log/php/error.log` without any connection to stdout/stderr, or if I disable logging altogether.
I don't know if it's my poor understanding or if this is a bug, but there should be a way to log to stdout/stderr without returning errors to nginx.
Can anyone help?
Instead of `/proc/self/fd/2`, try `/proc/1/fd/2` (if that works, we should probably consider adjusting our default configuration too).
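For anyone curious why those two paths can behave differently, a quick sketch (my reading, not confirmed by the maintainers): `/proc/self/fd/2` resolves to whatever the *opening* process currently has as stderr, while `/proc/1/fd/2` is PID 1's stderr, which is the stream `docker logs` captures.

```shell
# Run inside the php-fpm container, e.g. via `docker exec -it <container> sh`.
# Where does the current process's stderr point?
readlink /proc/self/fd/2
# Where does PID 1's stderr point? (Inspecting another PID's fds may
# require root; the official image runs as root by default.)
readlink /proc/1/fd/2 || echo "need root to inspect PID 1"
```

If the two outputs differ, writes to `/proc/self/fd/2` from a php-fpm worker will not reach the container's log stream.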
I am facing a similar issue: once too many notices come out of a request, we get a 502 from nginx.
I tried changing the following things (in `/usr/local/etc/php-fpm.d/docker.conf`):
```ini
[global]
error_log = /proc/self/fd/2
```

to

```ini
[global]
error_log = /proc/1/fd/2
```

and

```ini
[www]
; if we send this to /proc/self/fd/1, it never appears
access.log = /proc/self/fd/2
```

to

```ini
[www]
; if we send this to /proc/self/fd/1, it never appears
access.log = /proc/1/fd/2
```

and

```ini
log_limit = 8192
```

to

```ini
log_limit = 1024
```
However, it did not solve the issue.
Switching `log_limit` to `1024` fixed this issue for me. Make sure it's under `[global]`.
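As a sketch of that change, assuming the `/usr/local/etc/php-fpm.d/docker.conf` path used elsewhere in this thread (the exact file layout is an assumption):

```ini
; /usr/local/etc/php-fpm.d/docker.conf
[global]
error_log = /proc/self/fd/2
; back to the php-fpm default of 1024; the image overrides it to 8192,
; which can overflow nginx's default FastCGI buffering and surface as 502s
log_limit = 1024
```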
I question why this is overridden by default. It seems this decision should not be forced on the user: nginx is very commonly used with php-fpm, and it does not support this limit by default (only up to `2048`).
https://github.com/docker-library/php/pull/725#issuecomment-443540114
Thanks, I've seen this, but this is poor reasoning, in my opinion. In fact, he admits it wasn't changed in PHP due to backwards compatibility. I think it should keep the PHP default and you can choose to override if you want, not the other way around. At the very least it should be documented that this was bumped and you will need to take that into account when pairing with nginx, because literally everyone using nginx will run into this problem at some point.
@tomsisk I ran into this today after updating from 7.0 to 7.4 php-fpm based images :)
It seems suboptimal to me to change the `log_limit` value during an image build process...
I use the PHP-FPM 7.4 Alpine 3.12 image in conjunction with an `nginx:1-alpine` image in one k8s pod, and they talk to each other over a socket. After the pod starts, everything goes okay for a few hours, but then nginx cannot connect to php-fpm anymore (error 500). In the logs of the PHP-FPM container I see:
```
[preview-preview-756cfb8c75-lblcx preview] [18-Aug-2020 11:42:15] NOTICE: ready to handle connections
[preview-preview-756cfb8c75-lblcx preview] [18-Aug-2020 16:00:49] NOTICE: [pool www] child 13 exited with code 0 after 15514.912805 seconds from start
[preview-preview-756cfb8c75-lblcx preview] [18-Aug-2020 16:00:49] NOTICE: [pool www] child 16 started
[preview-preview-756cfb8c75-lblcx preview] [18-Aug-2020 16:00:59] NOTICE: [pool www] child 14 exited with code 0 after 15509.292689 seconds from start
[preview-preview-756cfb8c75-lblcx preview] [18-Aug-2020 16:00:59] NOTICE: [pool www] child 17 started
[preview-preview-756cfb8c75-lblcx preview] [18-Aug-2020 16:01:00] NOTICE: [pool www] child 15 exited with code 0 after 15508.987974 seconds from start
[preview-preview-756cfb8c75-lblcx preview] [18-Aug-2020 16:01:00] NOTICE: [pool www] child 18 started
```
I tried changing `/proc/self/fd/2` to `/proc/1/fd/2` (in `docker.conf`) and also set `log_limit` to 1024, but it did not help.
@stepanselyuk, from some quick research, most users suggest that it may be because of `pm.max_requests` in `php.ini`. And I would guess that there could be other php ini settings that cause workers to exit as well.
We try to keep GitHub issues focused on bugs in the image rather than general deployment debugging. This doesn't seem to be a problem with the image, so I would suggest posting questions like this in the Docker Community Forums, the Docker Community Slack, or Stack Overflow.
Hi @yosifkit, thank you. It seems I found the underlying issue: I write logs and cache to /dev/shm (in local dev-containers) and forgot to change this logic. So after ~4 hours the size of /dev/shm reached 64M, which caused the issue.
I'm having the same problem here: `php:7.4.9-fpm-alpine` does not output PHP logs to docker. I do not know what has changed, but an image built 2 months ago was working (it used `php:7.4.3-fpm-alpine` at that time).
After trying different things, it seems that outputting logs to `/proc/1/fd/2` (instead of `/proc/self/fd/2`) works, so I currently add this to my Dockerfile:

```dockerfile
RUN sed -i 's/\/proc\/self\/fd\/2/\/proc\/1\/fd\/2/g' /usr/local/etc/php-fpm.d/docker.conf
```
My Docker version on Ubuntu 20.04:

```
Client: Docker Engine - Community
 Version:           19.03.11
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        42e35e61f3
 Built:             Mon Jun  1 09:12:23 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.11
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       42e35e61f3
  Built:            Mon Jun  1 09:10:55 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
```
Hello everyone 👋
TL;DR: set `fastcgi.logging = Off` in your `php.ini`
Long explanation:
I know I'm late to the party, but this also means I have the advantage of more Google results being available, and I'm quite confident I found the real fix for what was reported by OP:
> The problem with this is logs are never shown in the container's stdout, so I changed it to:
>
> `error_log = /proc/self/fd/2`
>
> which meant that I could see the logs with docker logs -f, but I started seeing these errors on nginx
It was suggested that `log_limit` "fixes" the problem, but this is only an accidental workaround, as others have also noted.

It is true that if the fastcgi stderr channel contains data and is too big, nginx will act on it; this is the nginx source where it happens: https://github.com/nginx/nginx/blob/c9c3f2f005c43fd885bd78f86e1044261e639bea/src/http/modules/ngx_http_fastcgi_module.c#L1746-L1762
```c
ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
              "FastCGI sent in stderr: \"%*s\"", p - msg, msg);

flcf = ngx_http_get_module_loc_conf(r, ngx_http_fastcgi_module);

if (flcf->catch_stderr) {
    pattern = flcf->catch_stderr->elts;

    for (i = 0; i < flcf->catch_stderr->nelts; i++) {
        if (ngx_strnstr(msg, (char *) pattern[i].data,
                        p - msg)
            != NULL)
        {
            return NGX_HTTP_UPSTREAM_INVALID_HEADER;
        }
    }
}
```
nginx has a setting to control this behaviour, called `fastcgi_next_upstream`:

> Syntax: `fastcgi_next_upstream error | timeout | invalid_header | http_500 | http_503 | http_403 | http_404 | http_429 | non_idempotent | off ...;`
> Default: `fastcgi_next_upstream error timeout;`
> Context: `http`, `server`, `location`
>
> Specifies in which cases a request should be passed to the next server:
>
> `error`: an error occurred while establishing a connection with the server, passing a request to it, or reading the response header;
So theoretically this behaviour can also be mitigated by configuring nginx not to act that way when fastcgi stderr output is too large.
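A hedged sketch of what that nginx-side mitigation could look like (upstream address and buffer sizes are illustrative, not from this thread):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass php-fpm:9000;

    # Don't fail over / invalidate the response just because the
    # upstream produced stderr output nginx dislikes.
    fastcgi_next_upstream off;

    # Optionally give nginx more headroom for large FastCGI records.
    fastcgi_buffer_size 32k;
    fastcgi_buffers 16 32k;
}
```

This only treats the symptom on the nginx side; the `fastcgi.logging` fix below addresses the cause.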
But the real answer is from this stackoverflow question:
> I've been looking at the same issue, and after some trial and error, what worked for me is adding the below into the php-fpm pool configuration. I assume that the same flag could be set in other ways, such as the main php.ini file.
>
> ```ini
> php_admin_flag[fastcgi.logging] = off
> ```
The relevant documentation is here: https://www.php.net/manual/en/ini.core.php#ini.fastcgi.logging
And this is IMHO the correct fix: set `fastcgi.logging` off in your `php.ini`:

> Turns on SAPI logging when using FastCGI. Default is to enable logging
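A sketch of the two possible placements (file paths and pool name are assumptions based on the stock image layout; only one of the two is needed):

```ini
; Option A: globally, e.g. in a php.ini drop-in
; (illustrative path: /usr/local/etc/php/conf.d/zz-fastcgi-logging.ini)
fastcgi.logging = Off

; Option B: per pool, in the pool config (e.g. the [www] section):
; php_admin_flag[fastcgi.logging] = off
```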
What happens is:

1. usually everyone has `log_errors = On` and, in container environments, sends the `error_log` to `/dev/stdout` or similar
2. the default of `fastcgi.logging` being `On` means the errors are additionally sent upstream, to nginx in many setups
3. but since people also often run with the php-fpm-specific setting `catch_workers_output = yes`, the `error_log` to stdout in PHP is already good enough
That's why people also see the PHP errors (warnings, notices, etc.) twice:

- once in the php-fpm container
- again in the nginx container
Disabling it does the desired thing:

- you still have your PHP error logs in php-fpm
- no more of these logs in nginx
In my case I was processing a GraphQL call which triggered an exception; the exception was correctly caught inside the framework and turned into a proper GraphQL JSON response. But the exception was also logged deliberately, and what happened then was:

- the backend sent back an HTTP 200 JSON payload with the details of the error (aka "expected behaviour")
- nginx saw the fastcgi stderr (also being too long, see the buffer workaround) and acted upon it, turning it into an HTTP 502 error and throwing away my actual application response
- the frontend was confused because instead of a JSON body it got back HTML 💥
Personally (though I'm not an expert on this topic), since we're talking about the official docker PHP images here, I'd say it would make a lot of sense to either:

- disable this very setting by default, or
- at least mention the setting in https://github.com/docker-library/docs/tree/master/php#configuration so people are more aware of it
HTH and have a nice day!
`/proc/1/fd/2` works with php-fpm 8.2
@mfn Thank you very much! I've spent a lot of time looking for a solution. Your answer helped solve the problem
My two cents (not that they matter): I still think changing `log_limit` back to the default is the real "fix", but obviously that's something that would cause another regression. The default php settings and default nginx settings work together; it wasn't until the container overwrote the default that there was an issue. Again, there was a reason why the PHP team didn't change the default value in the first place (backwards compatibility). For an "official" container, it should just roll with the defaults.

It comes down to this: changing another default could potentially create other problems we don't know about yet, just like this one. It shouldn't be assumed that someone is working with specific technologies; it should just work as if I had installed PHP on my PC.
All that said, I think a note on this issue, even just a link here, is crucial. I know I spent many hours trying to figure this out years ago, and people are still doing the same thing.
I've finally implemented https://github.com/docker-library/php/issues/878#issuecomment-938595965 over in #1360, but I don't plan to let it merge until at least the new year (I've had a tab open to this issue for over a year now, intending to implement something :weary:).