
WIP: interface for map-reduce style kernels

Open Hardcode84 opened this issue 6 years ago • 1 comment

This PR adds new APIs to be used by implementers of pandas functions to help parallelize their kernels:

  • map_reduce(arg, init_val, map_func, reduce_func)
  • map_reduce_chunked(arg, init_val, map_func, reduce_func)

Parameters:

  • arg - a list-like object (a Python list, a NumPy array, or any other object with a similar interface)
  • init_val - the initial value for the reduction
  • map_func - the map function, applied to each element (or range of elements) in parallel (on different processes or on different nodes)
  • reduce_func - the reduction function used to combine the initial value and the results from different processes/nodes

The difference between the two functions (a sketch of both follows this list):

  • map_reduce applies the map function to each element in the range (the map function must take a single element and return a single element) and then applies the reduce function pairwise (the reduce function must take two elements and return a single element).
  • map_reduce_chunked applies the map function to the range of elements belonging to the current thread/node (the map function must take a range of elements as its parameter and return a list/array) and then applies the reduce function to entire ranges (the reduce function must take two ranges as parameters and return a list/array).
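
For illustration, here is a minimal sequential sketch of the intended semantics — not the actual SDC implementation, which runs the map step in parallel. The names map_reduce_ref and map_reduce_chunked_ref are inventions of this sketch, and the chunk count of 4 merely mirrors the hardcoded value mentioned in the issues list below:

from functools import reduce
import numpy as np

def map_reduce_ref(arg, init_val, map_func, reduce_func):
    # map each element, then fold the results into init_val
    return reduce(reduce_func, (map_func(x) for x in arg), init_val)

def map_reduce_chunked_ref(arg, init_val, map_func, reduce_func, n_chunks=4):
    # map whole chunks, then combine the per-chunk results
    result = init_val
    for chunk in np.array_split(np.asarray(arg), n_chunks):
        result = reduce_func(result, map_func(chunk))
    return result

# sum of squares with the per-element variant
total = map_reduce_ref(np.arange(10.0), 0.0, lambda x: x * x, lambda a, b: a + b)

# sort with the chunked variant (naive merge: concatenate + re-sort)
data = np.random.rand(16)
sorted_data = map_reduce_chunked_ref(data, np.empty(0), np.sort,
                                     lambda l, r: np.sort(np.concatenate((l, r))))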

You can also call any of these functions from inside a map or reduce function to get nested parallelism.

These functions are usable for both thread and MPI parallelism.

If you call them from a numba @njit function, they will be parallelised by numba's built-in parallelisation machinery (sketched below).
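
For context, this is the kind of parfor transformation numba applies to parallel loops — a standalone sketch using only numba's public API; it does not call the new map_reduce functions:

from numba import njit, prange

@njit(parallel=True)
def sum_of_squares(a):
    # numba recognises the reduction over this prange loop (a parfor)
    # and parallelises it across threads
    acc = 0.0
    for i in prange(len(a)):
        acc += a[i] * a[i]
    return acc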

If you call them from an @hpat.jit function, they will be distributed by the hpat parallelisation pass (this doesn't work currently).

A parallel series sort (numpy.sort per chunk plus a hand-written merge) is included as an example; a sketch of such a merge follows.
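
The PR's actual merge kernel is not reproduced here; the following is a minimal sequential sketch of a two-way merge of sorted arrays, with merge_sorted being a name invented for this sketch:

import numpy as np

def merge_sorted(left, right):
    # classic two-pointer merge of two already-sorted arrays
    out = np.empty(len(left) + len(right), dtype=left.dtype)
    i = j = k = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out[k] = left[i]
            i += 1
        else:
            out[k] = right[j]
            j += 1
        k += 1
    # copy whichever side has elements remaining
    out[k:k + len(left) - i] = left[i:]
    out[k + len(left) - i:] = right[j:]
    return out

# note: comparisons involving NaN are always False, so this merge can
# order NaNs differently than numpy.sort would (cf. the NaN issue below)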

Current issues:

  • Thread-parallel sort isn't working due to numba issue https://github.com/numba/numba/issues/4806
  • MPI parallelisation doesn't work at all yet (many issues; the biggest is that hpat supports only a very limited set of built-in functions (sum, mult, min, max) for parfor reductions)
  • The parallel sort handles NaNs differently from numpy.sort; this needs fixing
  • The thread/node count in map_reduce_chunked is hardcoded to 4; will fix
  • Proper documentation is still needed

The second part of this PR is a distribution-depth knob to (not-so-)fine-tune nested parallelism between distribution and threading:

  • A new environment variable, SDC_DISTRIBUTION_DEPTH, controls how many levels of nested parallel loops will be distributed by DistributedPass.
  • Distributable loops are any of the newly introduced map_reduce* functions or manually written prange loops.
  • The default value is 1, which means that only the outermost loop will be distributed by MPI, the next loop will be parallelised by numba, and all deeper loops will be executed sequentially (as numba doesn't support nested parallelisation).
  • Set SDC_DISTRIBUTION_DEPTH to 0 to disable distribution.
# SDC_DISTRIBUTION_DEPTH=1
for i in prange(I):          # distributed by DistributedPass
    for j in prange(J):      # parallelised by numba
        for k in prange(K):  # executed sequentially
            ...
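
By analogy, with distribution disabled (only the depth-0 setting itself is stated above; which pass picks up the outer loop in that case is an assumption of this sketch):

# SDC_DISTRIBUTION_DEPTH=0 (distribution disabled)
for i in prange(I):          # parallelised by numba (assumed)
    for j in prange(J):      # executed sequentially
        for k in prange(K):  # executed sequentially
            ...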

Hardcode84 avatar Nov 12 '19 10:11 Hardcode84

Hello @Hardcode84! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

There are currently no PEP 8 issues detected in this Pull Request. Cheers! :beers:

Comment last updated at 2019-11-12 11:16:09 UTC

pep8speaks avatar Nov 12 '19 10:11 pep8speaks