Valentin Churavy

1413 comments by Valentin Churavy

```
using MPIClusterManagers, Distributed
man = MPIManager(np = 4)
addprocs(man) # backend communicates with MPI
@everywhere begin
    using Elemental
    using LinearAlgebra
end
using DistributedArrays
A = drandn(50,50)
Al = Matrix(A)...
```

Yeah, the MPIClusterManager is necessary for the variant of the code that uses DistributedArrays; otherwise the ranks are not wired up correctly.
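
For context, a hedged sketch of what "wired up correctly" buys you (assuming 4 MPI ranks are available; the `remotecall_fetch` check is only illustrative and not part of the original snippet): with `MPIManager`, the Distributed workers are the MPI ranks, so the pieces of a `DArray` live on processes that can also call into Elemental/MPI.

```
using MPIClusterManagers, Distributed

man = MPIManager(np = 4)          # workers are launched as MPI ranks
addprocs(man)

@everywhere using DistributedArrays

A = drandn(50, 50)                # a DArray distributed over the MPI-backed workers
# Check which worker owns which local block of A.
for p in procs(A)
    sz = remotecall_fetch(d -> size(localpart(d)), p, A)
    println("worker $p holds a local block of size $sz")
end
```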

> should I open an issue

Please do :)

Without srun, but with salloc?

I am confused. MPIClusterManager should use `srun`/`mpiexec` to connect the "worker" processes, which allows them to use Elemental and to communicate among themselves using MPI. The front-end should not need...
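
Roughly what I mean, as a sketch (this mirrors the usual MPIClusterManagers pattern; `np = 4` and the print are just illustrative): the workers form their own `MPI.COMM_WORLD` and talk MPI among themselves, while the front-end never touches MPI.

```
using MPIClusterManagers, Distributed

manager = MPIManager(np = 4)      # workers launched via mpiexec (or srun, depending on setup)
addprocs(manager)

# Run MPI code on the workers only; the front-end process is not part of the communicator.
@mpi_do manager begin
    using MPI
    comm = MPI.COMM_WORLD
    println("I am rank $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))")
end
```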

Oh yeah that makes sense. That is rather unfriendly behavior on the Cray side... We still load the Elemental library on the front-end process.

Maybe we can use the PMI cluster manager I wrote: https://github.com/JuliaParallel/PMI.jl/blob/main/examples/distributed.jl

There is an extension of HAMTs called CTrie https://en.wikipedia.org/wiki/Ctrie that adds concurrency support. The Rust implementation of CTrie uses hazard pointers. https://github.com/ballard26/concurrent-hamt
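
For a feel of what the underlying HAMT does, here is a toy sketch of the bitmap-indexing step only; it is not the CTrie algorithm and has nothing to do with the hazard-pointer machinery in the linked crate.

```
# Each HAMT node stores a 32-bit occupancy bitmap plus a dense array of children;
# a key's hash is consumed 5 bits per level to pick a slot.
const BITS_PER_LEVEL = 5

function slot_lookup(h::UInt, level::Int, bitmap::UInt32)
    frag = (h >> (level * BITS_PER_LEVEL)) & 0x1f   # 5-bit hash fragment, 0..31
    bit  = UInt32(1) << frag                        # the slot this fragment selects
    present = (bitmap & bit) != 0                   # is that slot occupied?
    # children are stored densely: the child index is the popcount of the lower bits
    idx = count_ones(bitmap & (bit - UInt32(1))) + 1
    return present, idx
end

bitmap = UInt32(0b1011)                             # slots 0, 1 and 3 occupied
println(slot_lookup(hash("some key"), 0, bitmap))
```

A CTrie layers atomic CAS on indirection nodes on top of this structure, which is where hazard pointers (or another safe-reclamation scheme) come in for non-GC languages.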

The answer here is that we need to add them to the registry.

So the idea of the mangling is explicitly that the Nvidia tools can give you better results by demangling it so that the arguments are readable. cc: @maleadt
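
To make that concrete (a toy illustration, assuming binutils' `c++filt` is on the PATH; the mangled name is made up and is not what GPUCompiler actually emits): an Itanium-style mangled name encodes the argument types, which is what lets the tools render kernels readably.

```
# Hypothetical Itanium-mangled kernel name; c++filt recovers the argument types.
mangled = "_Z6kernelPfi"
println(readchomp(`c++filt $mangled`))   # prints: kernel(float*, int)
```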