
Multithreaded array initialization

Open · carstenbauer opened this pull request 3 years ago · 8 comments

This PR adds multithreaded array initialization, for better performance on systems with multiple NUMA domains. See my extensive comment on Discourse.

With this PR, I get about a 40% speedup for this example (with USE_GPU=false) when using a full AMD Zen3 CPU (64 cores, 4 NUMA domains) of Noctua 2.
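
The core idea behind the speedup is the first-touch NUMA policy: memory pages are physically placed in the NUMA domain of the thread that first writes to them, so arrays should be initialized by the same threads that later compute on them. A minimal sketch of the idea in plain Julia (the helper name numa_zeros is made up for illustration; it is not the PR's actual code):

```julia
# Allocate without initializing, then let every thread touch "its" chunk so the
# OS places the backing pages in that thread's NUMA domain (first-touch policy).
function numa_zeros(::Type{T}, dims...) where {T}
    A = Array{T}(undef, dims...)
    Threads.@threads :static for i in eachindex(A)
        @inbounds A[i] = zero(T)   # first touch happens on the owning thread
    end
    return A
end
```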

Timings (s) before

╭───────────┬─────────┬─────────┬─────────╮
│ # Threads │       1 │       8 │      64 │
├───────────┼─────────┼─────────┼─────────┤
│   compact │ 12.8708 │ 2.42357 │ 2.43713 │
│    spread │ 12.8708 │ 2.38331 │  3.3897 │
╰───────────┴─────────┴─────────┴─────────╯

Timings (s) after

╭───────────┬─────────┬─────────┬─────────╮
│ # Threads │       1 │       8 │      64 │
├───────────┼─────────┼─────────┼─────────┤
│   compact │ 12.8762 │ 2.41895 │ 1.51899 │
│    spread │ 12.8762 │ 2.35042 │ 2.08579 │
╰───────────┴─────────┴─────────┴─────────╯

Speedup in %

╭───────────┬─────┬─────┬──────╮
│ # Threads │   1 │   8 │   64 │
├───────────┼─────┼─────┼──────┤
│   compact │ 0.0 │ 0.0 │ 38.0 │
│    spread │ 0.0 │ 1.0 │ 38.0 │
╰───────────┴─────┴─────┴──────╯

NOTES:

  • We see that the changes have essentially no impact on the single-threaded case but give speedups when run with many threads (on a multi-NUMA-domain system).
  • We see that if we stay within one NUMA domain (e.g. 8 threads) we don't observe a speedup (as expected).
  • compact and spread indicate the thread pinning strategy.
  • Ideally, the access pattern of the parallel initialization should match the access pattern of the stencil as much as possible. In this PR, I just do the "trivial" parallel initialization (see the sketch after this list). In principle, one could think about passing the custom user kernel to @zeros and co., analyzing its structure, and then initializing "accordingly". But that's difficult...
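
For illustration, explicit parallel initialization through ParallelStencil's own kernel machinery could look like the following sketch (the kernel name initialize! and the array size are made up; this shows the "trivial" approach, not a structure-aware one):

```julia
using ParallelStencil
@init_parallel_stencil(Threads, Float64, 3)

# Touch every element with the same (ix, iy, iz) decomposition that the stencil
# kernels use later, so each page lands in the NUMA domain of its worker thread.
@parallel_indices (ix, iy, iz) function initialize!(A)
    A[ix, iy, iz] = 0.0
    return
end

A = @zeros(512, 512, 512)   # with this PR, @zeros itself already initializes in parallel
@parallel initialize!(A)
```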

cc @luraess @omlins

PS: Working on it at the GPU4GEO Hackathon in the Schwarzwald 😉

carstenbauer avatar Oct 04 '22 14:10 carstenbauer

Thanks for the contribution. I guess having something in PS for the Threads backend to control pinning and the thread-to-core mapping (or a close-to-optimal default solution) would be great! Especially for AMD CPUs with many NUMA regions, where this becomes significant.
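
For reference, the compact and spread strategies used in the benchmarks above can be set up with ThreadPinning.jl (a minimal sketch, assuming the package's current API):

```julia
using ThreadPinning

pinthreads(:compact)    # fill the cores of one NUMA domain/socket before the next
# pinthreads(:spread)   # alternatively, distribute threads round-robin across sockets

threadinfo()            # print the resulting thread-to-core mapping
```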

luraess avatar Oct 04 '22 17:10 luraess

BTW, @omlins, depending on how easy/difficult it would be to give me test access to Piz Daint, I could run some benchmarks there as well.

carstenbauer avatar Oct 05 '22 08:10 carstenbauer

@carstenbauer, as Ludovic probably already told you, Piz Daint does not have any AMD CPUs. Thus, for testing this, Superzack, Ludovic's cluster, will be better suited.

omlins avatar Oct 06 '22 16:10 omlins

I quickly tested another example, namely https://github.com/omlins/ParallelStencil.jl/blob/main/miniapps/acoustic3D.jl (with the visualization/animation part commented out). Same configuration as above, i.e. a 64-core node of Noctua 2 with 64 Julia threads pinned compactly. Below are the timings of the acoustic3D() function before and with this PR.

# Before PR: 44.315157 seconds (779.52 k allocations: 840.038 MiB, 1.09% gc time)
# With PR: 18.557505 seconds (791.20 k allocations: 840.475 MiB, 2.71% gc time)

This corresponds to about a 2.4x speedup. (cc @luraess)
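
The timings above are @time output; a run like this could be reproduced roughly as follows (a sketch: it assumes the miniapp file defines acoustic3D() without executing it on include):

```julia
# Start Julia with 64 threads, e.g.: julia -t 64
using ThreadPinning
pinthreads(:compact)                # pin threads compactly, as in the setup above

include("miniapps/acoustic3D.jl")   # path relative to the ParallelStencil.jl repo
@time acoustic3D()                  # reports time, allocations, and GC share
```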

carstenbauer avatar Oct 21 '22 12:10 carstenbauer

This relates also to https://github.com/omlins/ParallelStencil.jl/issues/53#issuecomment-1086978245

omlins avatar Dec 12 '22 10:12 omlins

What's holding back merging this?

carstenbauer avatar Jul 03 '23 07:07 carstenbauer

Bump

ranocha avatar Sep 13 '23 09:09 ranocha

@omlins bump

luraess avatar Oct 19 '23 09:10 luraess