multiprocessing-logging
handle is closed error in Python 3.9.13
When using multiprocessing-logging in tandem with some code that has a Pool, I'm seeing a new error after upgrading from Python 3.9.12 to 3.9.13. The error messages are like:
File "D:\a\cellfinder\cellfinder\.tox\py39-\lib\site-packages\multiprocessing_logging.py", line 81, in _receive
record = self.queue.get(timeout=0.2)
File "C:\hostedtoolcache\windows\Python\3.9.13\x64\lib\multiprocessing\queues.py", line 113, in get
if not self._poll(timeout):
File "C:\hostedtoolcache\windows\Python\3.9.13\x64\lib\multiprocessing\connection.py", line 260, in poll
self._check_closed()
OSError: [WinError 6] The handle is invalid
File "C:\hostedtoolcache\windows\Python\3.9.13\x64\lib\multiprocessing\connection.py", line 335, in _poll
return bool(wait([self], timeout))
File "C:\hostedtoolcache\windows\Python\3.9.13\x64\lib\multiprocessing\connection.py", line 888, in wait
ov.cancel()
File "C:\hostedtoolcache\windows\Python\3.9.13\x64\lib\multiprocessing\connection.py", line 335, in _poll
return bool(wait([self], timeout))
File "D:\a\cellfinder\cellfinder\.tox\py39-\lib\site-packages\multiprocessing_logging.py", line 81, in _receive
record = self.queue.get(timeout=0.2)
OSError: [WinError 6] The handle is invalid
OSError: handle is closed
and they go on pretty much indefinitely because they're in an infinite while loop in multiprocessing_logging.py. I'm afraid I haven't had time to put together a reproducible example, but I can confirm this definitely isn't an issue on Python 3.9.12 and is an issue on 3.9.13. Looking down the Python changelog, my best guess is that https://github.com/python/cpython/pull/31913 caused this.
To fix this, perhaps the exception handling in the lines below needs to be updated to catch the OSError being thrown?
https://github.com/jruere/multiprocessing-logging/blob/eaf924b10dcfe1360cfa6c8d9d5ed09945277cd3/multiprocessing_logging.py#L83-L90
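A minimal sketch of what that change might look like. All names here (receive_loop_sketch, handle_record, is_closed) are illustrative stand-ins, not the library's actual API; the idea is simply that OSError is caught alongside BrokenPipeError/EOFError so the loop exits instead of spinning forever:

```python
import queue as stdlib_queue


def receive_loop_sketch(log_queue, handle_record, is_closed):
    """Illustrative receive loop with the proposed fix: on newer Pythons a
    closed queue handle raises OSError, which should end the loop like
    BrokenPipeError/EOFError do, rather than repeating indefinitely."""
    while True:
        try:
            # Stop once the handler is closed and the queue is drained.
            if is_closed() and log_queue.empty():
                break
            record = log_queue.get(timeout=0.2)
            handle_record(record)
        except (KeyboardInterrupt, SystemExit):
            raise
        except (BrokenPipeError, EOFError, OSError):
            break  # the underlying handle is gone; stop draining
        except stdlib_queue.Empty:
            pass  # no record yet; poll again
```

The key point is only the broadened except clause; everything else mirrors the shape of the loop visible in the tracebacks above.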
Thank you for reporting this.
In the receive loop, it should probably raise in the catch for anything.
I'll try to reproduce in a test using that version of python.
Failed to reproduce, so far...
@dstansby Could you provide code to reproduce the problem.
I'm having a hard time reproducing https://github.com/python/cpython/pull/31913 as it happens on shutdown...
I managed to get an example down to:

import logging
from multiprocessing import Pool

import multiprocessing_logging

if __name__ == "__main__":
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    fh = logging.FileHandler('./log.txt')
    logger.addHandler(fh)

    multiprocessing_logging.install_mp_handler()
    logging.info('info message')

    with Pool(8) as worker_pool:
        pass
This doesn't print anything or error on Python 3.9.12, but on Python 3.9.13 raises:
Traceback (most recent call last):
File "/Users/dstansby/projects/napari/multiprocessing-logging/multiprocessing_logging.py", line 78, in _receive
if self._is_closed and self.queue.empty():
File "/Users/dstansby/mambaforge/envs/napari/lib/python3.9/multiprocessing/queues.py", line 129, in empty
return not self._poll()
File "/Users/dstansby/mambaforge/envs/napari/lib/python3.9/multiprocessing/connection.py", line 260, in poll
self._check_closed()
File "/Users/dstansby/mambaforge/envs/napari/lib/python3.9/multiprocessing/connection.py", line 141, in _check_closed
raise OSError("handle is closed")
OSError: handle is closed
It seems like all the lines in the example above are needed to reproduce the error.
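The error in that traceback can be seen in isolation, without any logging involved: polling a multiprocessing queue after its handle has been closed raises the same OSError (a minimal sketch; this is the behavior I'd expect on recent CPython, where empty() polls the reader connection):

```python
from multiprocessing import Queue

q = Queue()
q.close()  # closes the underlying reader handle
try:
    q.empty()  # empty() polls the now-closed reader connection
except OSError as exc:
    print(exc)  # e.g. "handle is closed"
```

This matches the bottom frames of the traceback above, where Queue.empty() ends up in Connection._check_closed().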
I am not sure if it's helpful, but the same issue arises when changing python version from 3.10.4 to 3.10.5. Reproduces nicely on the testcase of @dstansby too.
Same error with Python 3.10.5 :/
Visible here -> https://github.com/megalinter/megalinter/runs/7121014978?check_suite_focus=true#step:4:477
Everything was working perfectly fine with python 3.9.7
It appears that the fix for this bug is to either add OSError to the list of error types on https://github.com/jruere/multiprocessing-logging/blob/3449c4e785bb61c99111c05e69c2dba0c2ab6cd1/multiprocessing_logging.py#L83 or to move this line out of the except statement: https://github.com/jruere/multiprocessing-logging/blob/3449c4e785bb61c99111c05e69c2dba0c2ab6cd1/multiprocessing_logging.py#L14
Not sure if this is the "correct" fix or a bandaid for the underlying issue, but it appears to work just fine for me. Since this issue only seems to crop up when the program is shutting down, I'm not too worried about a bad state.
@jruere I'd put in a PR but wasn't sure which solution was better and if you wanted to further investigate the underlying issue instead of just trapping the error.
Update to my previous post: My above fix will result in some messages getting lost if the program exits while things are still queued. I fixed that by adding a signal handler and calling the "close" method explicitly. I will create a PR eventually, but unfortunately I am traveling for the next week so it will have to wait until after then. I may also release my version separately just because I've deviated a bit from the underlying code.
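For anyone wanting to see the shape of that "close explicitly on shutdown" idea, here is a rough sketch under my own naming (install_shutdown_flush is a hypothetical helper, not the library's API, and I use atexit rather than a signal handler for portability; the actual PR may differ):

```python
import atexit
import logging


def install_shutdown_flush(handler):
    """Hypothetical helper: register a callback that flushes and closes the
    handler at interpreter exit, so records still queued when the program
    ends are not silently lost."""
    def _shutdown():
        try:
            handler.flush()
            handler.close()
        except OSError:
            pass  # queue handles may already be closed during shutdown
    atexit.register(_shutdown)
    return _shutdown  # returned so callers can unregister or test it
```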
Any update on this? Don't mean to rush but it's just annoying when running a dev server and every time I make a change in my code these errors go on for maybe 5-10 seconds before the server reloads and I can continue testing.
I'm sorry I've not been able to work on this project. I'm really swamped. :(
I got distracted with other things. I'll put in a PR tonight with my fix for review.
https://github.com/jruere/multiprocessing-logging/pull/57
Here's the pull mentioned by my other account. Probably not suitable for merging as-is, but at least it will give people an idea on what needs to be fixed for their version.
We're also facing this issue for static-deployer. We're running a pipe on a docker image based on alpine:3.16
that uses Python 3.10.5-r0.
The issue also happens in Python 3.9.13. I haven't tested other versions.
Any news about this issue? It prevents me from upgrading the base docker image of MegaLinter :(
So sorry, I've not had time to work on this.
Same issue on Python 3.10.8
@jruere Submitted a PR for this: https://github.com/jruere/multiprocessing-logging/pull/58 Thanks
#58 was merged. I'll release this week.
@jruere any news about when you plan to release the new version ? :)
@jruere Hi. Do you have any news when the new version will be released? )
I need to do something to be able to push to PyPI and then I'll release.
Sorry for the delay. I'll do it ASAP.
@jruere To simplify future releases (and save you time!), you could define a GitHub Action that automatically publishes to PyPI whenever you declare a new multiprocessing-logging release on GitHub. All you need is a PyPI token defined in the repository secrets.
Example here with github-dependents-info : -> https://github.com/nvuillam/github-dependents-info/blob/main/.github/workflows/release.yml
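A minimal workflow of that kind might look like the following (untested sketch; the secret name PYPI_API_TOKEN is an assumption, to be created in the repository settings, and the action versions are just current ones):

```yaml
name: Release
on:
  release:
    types: [published]

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.x"
      # Build sdist and wheel into dist/
      - run: python -m pip install build && python -m build
      # Upload to PyPI using the token stored in repo secrets
      - uses: pypa/gh-action-pypi-publish@release/v1
        with:
          password: ${{ secrets.PYPI_API_TOKEN }}
```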
@jruere I totally understand the lack of time to maintain our open-source apps (I suffer from that myself), and waiting for maintainer availability is part of the game, but I really need a delivery date for the new version, because this prevents MegaLinter from upgrading its Python version (so the latest versions of some linters can not be used).
Please can you provide a date for the new version? If you are not available soon that's of course totally OK, but I need to know so I can use a workaround (which could be vendoring multiprocessing-logging locally, or creating a hard fork that I'll maintain).
Many thanks for your reply :)
@jruere Hi. Do you have any news?
I'm so sorry for the delay. I'll try to get it done this week. :(
Happy to help with the maintenance of the project if needed.
I just confirmed that installing master directly from GitHub fixed my issue https://github.com/jruere/multiprocessing-logging/issues/60
Poetry dependency line example if someone needs a quick fix:
multiprocessing-logging = { git = "https://github.com/jruere/multiprocessing-logging.git", rev = "bc18e092af78d95781c57e3dffbb183da6882031" }
Any news on that release @jruere? Thanks!
You don't need to wait for a PyPI release to install the new version.
Just use pip install "git+https://github.com/jruere/multiprocessing-logging.git#egg=multiprocessing_logging"
For security reasons I'd really need an official release instead of a git repo URL, whose content can change at any time.
You can fork the repo then, the wonders of open source software