photonix
epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too
Describe the bug This erratic behavior is seen while Photonix is very busy doing a full scan of the photo library. Given that the web UI is still responsive at this point, I am treating it as an error, though it could also be a resource limitation issue.
photonix | 2021/07/30 10:07:26 [info] 64#64: *1079 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream, client: 10.42.0.24, server: , request: "GET /thumbnailer/photo/256x256_cover_q50/0e81d066-3a38-4fcd-aa1f-fafbf8004e24/ HTTP/1.1", upstream: "http://127.0.0.1:8000/thumbnailer/photo/256x256_cover_q50/0e81d066-3a38-4fcd-aa1f-fafbf8004e24/", host: "lenovo:8888", referrer: "http://lenovo:8888/"
photonix | 2021/07/30 10:07:26 [info] 64#64: *1118 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream, client: 10.42.0.24, server: , request: "GET /thumbnailer/photo/256x256_cover_q50/120f39fe-502a-4a46-918c-670b8acefe37/ HTTP/1.1", upstream: "http://127.0.0.1:8000/thumbnailer/photo/256x256_cover_q50/120f39fe-502a-4a46-918c-670b8acefe37/", host: "lenovo:8888", referrer: "http://lenovo:8888/"
photonix | 2021/07/30 10:07:26 [info] 64#64: *1069 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream, client: 10.42.0.24, server: , request: "GET /thumbnailer/photo/256x256_cover_q50/068cbd97-bed9-41f6-ad7d-cd4373d7422a/ HTTP/1.1", upstream: "http://127.0.0.1:8000/thumbnailer/photo/256x256_cover_q50/068cbd97-bed9-41f6-ad7d-cd4373d7422a/", host: "lenovo:8888", referrer: "http://lenovo:8888/"
photonix | 2021/07/30 10:07:26 [info] 64#64: *1114 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream, client: 10.42.0.24, server: , request: "GET /thumbnailer/photo/256x256_cover_q50/16074127-f5a9-419b-bdef-8c4dbc995d2d/ HTTP/1.1", upstream: "http://127.0.0.1:8000/thumbnailer/photo/256x256_cover_q50/16074127-f5a9-419b-bdef-8c4dbc995d2d/", host: "lenovo:8888", referrer: "http://lenovo:8888/"
photonix | 2021/07/30 10:07:26 [info] 64#64: *992 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream, client: 10.42.0.24, server: , request: "GET /thumbnailer/photo/256x256_cover_q50/015bd231-ef2b-4883-85c9-2ec23545d380/ HTTP/1.1", upstream: "http://127.0.0.1:8000/thumbnailer/photo/256x256_cover_q50/015bd231-ef2b-4883-85c9-2ec23545d380/", host: "lenovo:8888", referrer: "http://lenovo:8888/"
photonix | 2021/07/30 10:07:26 [info] 64#64: *1116 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream, client: 10.42.0.24, server: , request: "GET /thumbnailer/photo/256x256_cover_q50/0ab487b9-5dfa-49cb-86f0-9f801e883bdf/ HTTP/1.1", upstream: "http://127.0.0.1:8000/thumbnailer/photo/256x256_cover_q50/0ab487b9-5dfa-49cb-86f0-9f801e883bdf/", host: "lenovo:8888", referrer: "http://lenovo:8888/"
photonix | Cannot decode file /data/photos/Digikam_TO_BE_FILTERED/IMG20191231170900.jpg.xmp
photonix | 2021-07-30 10:07:26,575 INFO Finished raw process scheduling
photonix | 2021-07-30 10:07:26,577 ERROR File is not a supported type: /data/photos/Digikam_TO_BE_FILTERED/IMG20191231170900.jpg.xmp (application/rdf+xml)
photonix | 2021-07-30 10:07:26,578 INFO Recording photo /data/photos/Digikam_TO_BE_FILTERED/IMG20191231170900.xmp
photonix | Cannot decode file /data/photos/Digikam_TO_BE_FILTERED/IMG20191231170900.xmp
photonix | 2021-07-30 10:07:27,053 ERROR File is not a supported type: /data/photos/Digikam_TO_BE_FILTERED/IMG20191231170900.xmp (applicati
To Reproduce Steps to reproduce the behavior:
- Do a full scan of the photo collection (15k photos/videos)
- Open the Photonix web UI
- Hit browser refresh
- See error
Expected behavior No error should be reported.
Screenshots Bug console log above
Server (please complete the following information):
- Device: x86 machine with 8 GiB RAM
- OS: Debian Testing
- Architecture: x86_64
Desktop (please complete the following information):
- OS: Debian Testing
- Browser: Chromium
- Version: 90
There's also this message logged when refreshing from the web browser:
photonix | 2021/07/30 11:21:07 [warn] 73#73: *581 an upstream response is buffered to a temporary file /var/lib/nginx/proxy/8/00/0000000008 while reading upstream, client: 10.42.0.24, server: , request: "POST /graphql HTTP/1.1", upstream: "http://127.0.0.1:8000/graphql", host: "lenovo:8888", referrer: "http://lenovo:8888/"
photonix | 2021/07/30 11:21:07 [warn] 66#66: *567 an upstream response is buffered to a temporary file /var/lib/nginx/proxy/9/00/0000000009 while reading upstream, client: 10.42.0.24, server: , request: "POST /graphql HTTP/1.1", upstream: "http://127.0.0.1:8000/graphql", host: "lenovo:8888", referrer: "http://lenovo:8888/"
photonix | IMPORTED /data/photos/Digikam_TO_BE_FILTERED/IMG20191213101002.jpg
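For what it's worth, the [warn] lines above only mean the GraphQL response was larger than nginx's in-memory proxy buffers, so nginx spilled it to a temporary file on disk. If the disk writes ever become a concern, the standard nginx proxy directives can enlarge the buffers. A sketch only: the location block and sizes below are illustrative and not Photonix's shipped configuration.

```nginx
location /graphql {
    proxy_pass http://127.0.0.1:8000;
    # Illustrative sizes: give nginx enough in-memory buffer space that a
    # typical GraphQL response no longer spills to /var/lib/nginx/proxy.
    proxy_buffer_size 16k;       # buffer for the response headers
    proxy_buffers 8 32k;         # 8 buffers of 32k for the response body
    proxy_busy_buffers_size 64k; # cap on buffers busy sending to the client
}
```

Reloading nginx after such a change (and re-issuing the POST /graphql request) should make the "buffered to a temporary file" warnings disappear for responses that now fit in memory.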
Thanks for reporting this @rickysarraf. I'll see if this is causing a problem after I've worked on fixing resource issues in this ticket: https://github.com/photonixapp/photonix/issues/83 .
That's not really a bug: nginx logs these messages whenever the client (or the client's JS library) closes the connection after the request was submitted (i.e. HTTP headers sent) but before the actual response was received. My understanding is that this is caused by the GraphQL library, which can sometimes (for a reason unknown to me) close the connection before actually receiving the response.
I suspect this is caused by navigation in the frontend SPA: whenever the client doesn't wait for all GraphQL requests to finish and navigates to another page (e.g. starts another search query), the previous, not-yet-processed requests are aborted. Given the project's sustainability goals, there should be an event handler on the Python side that somehow receives closed-connection events from Gunicorn and stops processing the aborted tasks, to limit the energy wasted on the CPU. I'm not sure Gunicorn supports this case, though.
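Gunicorn's sync WSGI workers indeed expose no disconnect hook, but if the app were served over ASGI, the spec does deliver an `http.disconnect` event on the receive channel when the client goes away. A rough, hypothetical middleware sketch (not part of Photonix) that races the app against that event and cancels the work for aborted requests:

```python
import asyncio


class CancelOnDisconnect:
    """Hypothetical ASGI middleware: cancel in-flight work on client disconnect.

    The ASGI spec sends an ``http.disconnect`` event on the receive channel
    when the client closes the connection; we race the wrapped app against
    that event and cancel the app's task if the client goes away first, so
    aborted requests stop burning CPU.
    """

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":
            await self.app(scope, receive, send)
            return

        queue = asyncio.Queue()          # forwards normal events to the app
        disconnected = asyncio.Event()   # set once http.disconnect arrives

        async def pump():
            # Relay events to the app, watching for the disconnect.
            while True:
                message = await receive()
                if message["type"] == "http.disconnect":
                    disconnected.set()
                    return
                await queue.put(message)

        app_task = asyncio.ensure_future(self.app(scope, queue.get, send))
        pump_task = asyncio.ensure_future(pump())
        gone = asyncio.ensure_future(disconnected.wait())
        try:
            done, _ = await asyncio.wait(
                {app_task, gone}, return_when=asyncio.FIRST_COMPLETED
            )
            if app_task in done:
                app_task.result()  # surface any exception from the app
            else:
                app_task.cancel()  # client disconnected: drop the work
        finally:
            pump_task.cancel()
            gone.cancel()
```

This only helps request handling, of course; whether it is worth the complexity over letting nginx absorb the aborted requests is a separate question.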