Leo Fang
What if we do this? We first look up `MSMPI_VER` in `mpi.h` (which conda-forge defines to 0xA01). If it's equal to the default (0x100), it means it's likely...
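To illustrate, here's a minimal sketch of that version check, assuming we parse the macro out of the header text (the `parse_msmpi_ver` helper is hypothetical; the 0xA01 and 0x100 values are the ones mentioned above):

```python
import re

def parse_msmpi_ver(mpi_h_text):
    """Extract the MSMPI_VER macro value from mpi.h text (hypothetical helper)."""
    m = re.search(r"#define\s+MSMPI_VER\s+(0x[0-9A-Fa-f]+)", mpi_h_text)
    return int(m.group(1), 16) if m else None

# conda-forge's MS-MPI headers define MSMPI_VER to 0xA01;
# 0x100 would be the stock default mentioned above.
header = "#define MSMPI_VER 0xA01\n"
assert parse_msmpi_ver(header) == 0xA01
```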
> The `cmake` log tells nothing about what went wrong during mpich build. @hzhou Unfortunately there's not much I can share other than directing you to the full failed CI...
Maybe we should also accept `MPI4PY_CFG` as an alias of `MPICFG`...
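A minimal sketch of what accepting the alias could look like, assuming a plain environment-variable lookup (the helper name and the precedence of `MPICFG` over `MPI4PY_CFG` are my assumptions here, not mpi4py's actual build logic):

```python
import os

def get_mpi_cfg(env=os.environ, default="mpi"):
    """Resolve the build configuration section name, accepting MPI4PY_CFG
    as an alias of MPICFG (assumed precedence: MPICFG wins if both are set)."""
    return env.get("MPICFG") or env.get("MPI4PY_CFG") or default
```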
If you're on Summit, Spectrum MPI isn't your only choice. IIRC you can also build against Open MPI. How about switching to another MPI backend and retrying the same code?
Maybe relevant to cupy/cupy#4892?
> I spent some time now trying to do that and it seems indeed there's no simple, clean way to do so. In [d789117](https://github.com/rapidsai/rmm/commit/d7891178b1c6d3bf52b28862e76efab2e71a19ea) I exposed the definitions for `cudaStreamLegacy`/`cudaStreamPerThread`...
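For context, a small Python-side sketch of exposing those handles, assuming the numeric values from the CUDA runtime headers (`cudaStreamLegacy` is `((cudaStream_t)0x1)` and `cudaStreamPerThread` is `((cudaStream_t)0x2)`); the helper function is hypothetical, not rmm's actual API:

```python
# Special CUDA stream handles as defined in the CUDA runtime headers:
CUDA_STREAM_DEFAULT = 0x0      # the default (NULL) stream
CUDA_STREAM_LEGACY = 0x1       # cudaStreamLegacy
CUDA_STREAM_PER_THREAD = 0x2   # cudaStreamPerThread

def is_special_stream(ptr: int) -> bool:
    """Return True if `ptr` is one of CUDA's implicit stream handles
    rather than a pointer to an explicitly created stream."""
    return ptr in (CUDA_STREAM_DEFAULT, CUDA_STREAM_LEGACY, CUDA_STREAM_PER_THREAD)
```

Exposing the handles as plain integers lets a Python binding compare them against stream pointers without importing the CUDA headers.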
I have been devising an interface for C/C++ libraries to share a memory pool, and I would need a way to expose this interface all the way to Python so...
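As a rough sketch of the kind of C-level interface that could be shared across libraries and surfaced to Python, here is a hypothetical `ctypes` mirror of a pool vtable: an opaque pool handle plus allocate/deallocate function pointers (all names are illustrative, not an actual rmm/CuPy API):

```python
import ctypes

# Signatures: void* alloc(void* pool, size_t size)
#             void  free(void* pool, void* ptr, size_t size)
ALLOC_FN = ctypes.CFUNCTYPE(ctypes.c_void_p, ctypes.c_void_p, ctypes.c_size_t)
FREE_FN = ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.c_void_p, ctypes.c_size_t)

class MemoryPoolInterface(ctypes.Structure):
    """Hypothetical struct a C/C++ library could hand to Python so both
    sides draw allocations from the same pool."""
    _fields_ = [
        ("pool", ctypes.c_void_p),   # opaque handle to the shared pool
        ("alloc", ALLOC_FN),         # allocate `size` bytes from `pool`
        ("free", FREE_FN),           # return a previous allocation to `pool`
    ]
```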
Hi @chkothe, thanks for your very detailed report! It's valuable and as a CuPy contributor I'm happy to hear both CuPy and the array API help your work. Sorry to...
Because it's a bit confusing if I'm in the API docs and want to jump back to the landing (home) page. I hit 404 several times, so I decided to...
> Maybe it'd be a matter of running one CI job against `main` of https://github.com/data-apis/array-api-tests?

Sounds like a good idea to start with.

> What info are you including in...