Clear global THREAD_CACHE in child after fork()
Consider the following bug:
- `trio.to_thread.run_sync()` is called, creating a worker thread. The worker thread is left in the global `THREAD_CACHE`.
- The Python process forks for some reason (perhaps via the `multiprocessing` module).
- The child process now calls `trio.to_thread.run_sync()`. The global `THREAD_CACHE` still contains a reference to the worker thread, so the child process thinks it has an idle worker thread and tries to dispatch a task to it. However, the worker thread doesn't actually exist in the child process (see the sketch below), so `trio.to_thread.run_sync()` hangs forever.
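The root cause is that POSIX `fork()` only duplicates the calling thread: the Python-level `Thread` objects are copied into the child, but the OS threads behind them are not. A quick standalone demonstration (POSIX only, nothing Trio-specific):

```python
import os
import threading
import time

# Park a "worker" thread, then fork.
worker = threading.Thread(target=time.sleep, args=(30,), daemon=True)
worker.start()

pid = os.fork()
if pid == 0:
    # Child: the Thread object was inherited, but the OS thread was not;
    # only the thread that called fork() exists here.
    print("threads in child:", threading.active_count())  # 1 (main thread only)
    print("worker alive in child?", worker.is_alive())    # False
    os._exit(0)
else:
    os.waitpid(pid, 0)
    print("worker alive in parent?", worker.is_alive())   # True
```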
Because `THREAD_CACHE` is interpreter-global, this can happen even if the two Trio run loops are completely separate. For example, in a test suite, one test might call `trio.to_thread.run_sync()`, and then later a completely separate test might use `multiprocessing` to spawn a process that calls `trio.to_thread.run_sync()`.
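A hypothetical sketch of that test-suite scenario (the names are illustrative, and it assumes the `"fork"` start method; with `"spawn"` the child gets a fresh interpreter and an empty `THREAD_CACHE`):

```python
import multiprocessing
import trio

def _child():
    # Inherits the parent's populated THREAD_CACHE via fork() and hangs.
    trio.run(trio.to_thread.run_sync, print, "never printed")

def test_one():
    # Populates the interpreter-global THREAD_CACHE with a worker thread.
    trio.run(trio.to_thread.run_sync, print, "hello from a worker")

def test_two():
    # A completely separate test that forks a child which uses to_thread.
    ctx = multiprocessing.get_context("fork")
    p = ctx.Process(target=_child)
    p.start()
    p.join()  # hangs forever, due to the bug (assuming test_one ran first)
```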
I think it should be fairly simple to fix this by using `os.register_at_fork()` to ensure `THREAD_CACHE` is cleared in the child whenever the interpreter forks.
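A minimal sketch of that fix, where `ThreadCache` is a stand-in for Trio's internal cache class (the real names and structure may differ):

```python
import os

class ThreadCache:
    """Stand-in for Trio's internal thread cache; real structure differs."""
    def __init__(self):
        self.idle_workers = {}  # placeholder for parked worker threads

THREAD_CACHE = ThreadCache()

def _clear_thread_cache_after_fork():
    # Runs in the child immediately after every fork(). The parked workers
    # only exist in the parent, so swap in a fresh, empty cache rather than
    # trying to dispatch to threads that were never copied.
    global THREAD_CACHE
    THREAD_CACHE = ThreadCache()

# after_in_child handlers fire in the child process right after fork().
os.register_at_fork(after_in_child=_clear_thread_cache_after_fork)
```

Since `multiprocessing`'s `"fork"` start method goes through `os.fork()`, registering the handler once at import time would cover that case too.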
There's some previous discussion of handling `fork()` in #1614, though a bunch of the concerns there probably aren't relevant anymore...
Note this can happen even if the `fork()` isn't inside `trio.run()`. For example:
```python
import os
import trio

async def foo():
    # ... do some async stuff ...
    await trio.to_thread.run_sync(...)
    # ... do some async stuff ...

trio.run(foo)
# We are now outside trio.run(), but the worker thread is still in THREAD_CACHE.
os.fork()
# This is fine in the parent, but fails in the child, due to the bug.
trio.run(foo)
```
My understanding is that it's not really practical to support `fork()` inside `trio.run()`, but it should generally be fine to have `fork()` and `trio.run()` in the same program at different times -- except for this bug :)
Do you want to contribute a fix? Otherwise I might try to fix this (though I'm not very comfortable with things related to threads...)