
Shared objects on multithreading

Open tgrandje opened this issue 2 years ago • 0 comments

I was a bit mystified to see that timeout-decorator doesn't play well with multithreading. I know this will seem quite obvious now that I have considered what happens under the hood.

A simple example to illustrate my case, where a shared list of arguments is used between threads and items are popped from it:

import pebble
from timeout_decorator import timeout
from multiprocess import Manager


class Object:

    THREADS = 4

    def _thread_func(self):
        # Manager-backed list shared between the workers
        self.L = Manager().list(range(10))
        with pebble.ThreadPool(self.THREADS) as pool:
            future = pool.map(self.func_to_thread, self.L)
            pool.close()
            for k in future.result():
                print(k)

    @timeout(seconds=1, use_signals=False)
    def func_to_thread(self, _dummy):
        # _dummy is only there to force multithreading
        return self.L.pop()


my_obj = Object()
my_obj._thread_func()
print(my_obj.L)

If you execute this code and (un)comment the @timeout(...) line, you will see that the behaviour changes a lot (I suspect one or more deep copies of self.L are created by timeout). Note that it keeps happening with a manager in place of the plain list I used at first.
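To make the deep-copy hypothesis concrete: with use_signals=False, timeout-decorator runs the decorated call in a subprocess, so self (and any plain attribute such as self.L) gets pickled into that child. A minimal sketch to check this (the function name child_pid is mine, not from the repro above); if the two printed PIDs differ, the call really crossed a process boundary:

import os
from timeout_decorator import timeout

@timeout(seconds=5, use_signals=False)
def child_pid():
    # With use_signals=False, timeout-decorator executes this function
    # in a separate process and ships the return value back
    return os.getpid()

if __name__ == "__main__":
    print("parent:", os.getpid(), "child:", child_pid())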

I'd say two elements need to be taken into consideration here:

  • first, the warning in the module's README doesn't seem sufficient: the decorator clearly impacts multithreading whatever your functions return. I suspect the module should be flagged as unsafe for multithreading.
  • secondly, I don't understand why this is still happening with a manager. Is this a bug, and can/should it be resolved? For instance, if I keep the manager and simply move from multithreading to multiprocessing (replacing pebble.ThreadPool with pebble.ProcessPool), everything works fine; see the sketch after this list.
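For reference, here is a sketch of the multiprocessing variant from the second point, with hypothetical names (ObjectMP, func_to_process) and a __main__ guard added so worker processes can import the module cleanly:

import pebble
from timeout_decorator import timeout
from multiprocess import Manager


class ObjectMP:

    PROCESSES = 4

    def _process_func(self):
        self.L = Manager().list(range(10))
        # Same layout as the repro above, but with a process pool: the
        # manager proxy is re-pickled into each worker yet keeps pointing
        # at the single server-side list, so the pops are genuinely shared
        with pebble.ProcessPool(self.PROCESSES) as pool:
            future = pool.map(self.func_to_process, self.L)
            for k in future.result():
                print(k)

    @timeout(seconds=1, use_signals=False)
    def func_to_process(self, _dummy):
        return self.L.pop()


if __name__ == "__main__":
    my_obj = ObjectMP()
    my_obj._process_func()
    print(my_obj.L)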

tgrandje · Dec 14 '22 13:12