
Support for multithreading.

Drvanon opened this issue 11 months ago • 1 comment

Chains offer an amazing opportunity for parallelization: unless a call to "thru" is encountered (or a function accepts the whole input), all calls can be parallelized. Right now, when I execute the following:

>>> import time
>>> from datetime import datetime
>>> from pydash import py_
>>> py_(range(5)).map(time.sleep).map(lambda _: datetime.now()).value()
[datetime.datetime(2023, 8, 16, 13, 22, 11, 875152),
 datetime.datetime(2023, 8, 16, 13, 22, 11, 875152),
 datetime.datetime(2023, 8, 16, 13, 22, 11, 875152),
 datetime.datetime(2023, 8, 16, 13, 22, 11, 875152),
 datetime.datetime(2023, 8, 16, 13, 22, 11, 875152)]

whereas I believe the following output should also be possible:

>>> py_(range(5)).map(time.sleep).map(lambda _: datetime.now()).value()
[datetime.datetime(2023, 8, 16, 13, 22, 11, 875152),
 datetime.datetime(2023, 8, 16, 13, 22, 12, 875152),
 datetime.datetime(2023, 8, 16, 13, 22, 13, 875152),
 datetime.datetime(2023, 8, 16, 13, 22, 14, 875152),
 datetime.datetime(2023, 8, 16, 13, 22, 15, 875152)]
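
For illustration, here is a minimal standard-library sketch (not pydash; sleep_then_stamp is a hypothetical helper fusing the two chain steps) that produces exactly that staggered output, finishing in roughly 4 s instead of 10 s:

import time
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime

def sleep_then_stamp(n):
    # sleep n seconds, then record when this element finished
    time.sleep(n)
    return datetime.now()

# with five workers every sleep starts at the same moment, so the
# timestamps come out roughly one second apart
with ThreadPoolExecutor(max_workers=5) as executor:
    print(list(executor.map(sleep_then_stamp, range(5))))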

Drvanon · Aug 16 '23 11:08

I was thinking about this earlier today. Though it would be possible (and significantly beneficial) to identify chain sections that do not depend on each other and execute those parts in parallel, that might be very challenging. What might be simpler is to provide a parallel API for the map functions, where the pydash.collections.itermap function is replaced with a threaded_map function.

Maybe something like:

import multiprocessing

import pydash as pyd

pool = multiprocessing.Pool()

def pooled_iter_map(collection, iteratee=None):
    # Pool.imap takes the function first and the iterable second,
    # and yields results lazily in order
    return pool.imap(iteratee or pyd.identity, collection)

def pooled_map(collection, iteratee=None):
    return list(pooled_iter_map(collection, iteratee))

def pooled_flat_map(collection, iteratee=None):
    return pyd.flatten(pooled_iter_map(collection, iteratee=iteratee))
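
Hypothetical usage (assuming the pool has at least five workers): the sleeps overlap, so this returns after roughly 4 s instead of 10 s. One caveat with multiprocessing is that the iteratee must be picklable, so lambdas would fail; multiprocessing.pool.ThreadPool or concurrent.futures might be a more forgiving basis for a threaded_map.

import time

# the five sleeps run concurrently in worker processes
print(pooled_map(range(5), time.sleep))  # [None, None, None, None, None]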

Drvanon · Aug 17 '23 10:08