mpi4py

Python bindings for MPI

Results: 18 mpi4py issues (sorted by recently updated)

Don't assume IPv4 sockets are available in TestDPM.testJoin (test_dynproc.py). Instead, use the address family that socket.getaddrinfo() reports as available: INET (IPv4) by default if available, otherwise INET6 (IPv6) or UNIX sockets (see the sketch after the Debian report below)....

test

Debian reports that mpi4py fails to build on IPv6-only build daemons; see Debian [Bug#1015912](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1015912). A build log is available at https://buildd.debian.org/status/fetch.php?pkg=mpi4py&arch=amd64&ver=3.1.3-2%2Bb1&stamp=1658589239&raw=0. The error message is in testJoin (test_dynproc.TestDPM): ```...
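A minimal sketch of the address-family selection described above, in plain Python; the helper name and the `localhost` default are illustrative, not mpi4py code:

```
import socket

def pick_family(host="localhost", port=0):
    """Return a usable socket family, preferring IPv4, then IPv6, then UNIX."""
    families = {info[0] for info in socket.getaddrinfo(host, port)}
    if socket.AF_INET in families:      # IPv4 when the host supports it
        return socket.AF_INET
    if socket.AF_INET6 in families:     # IPv6-only hosts, e.g. some build daemons
        return socket.AF_INET6
    return socket.AF_UNIX               # fall back to UNIX domain sockets

family = pick_family()
with socket.socket(family, socket.SOCK_STREAM) as sock:
    print("using address family:", family)
```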

When I use the command "module load mpi/mpich-$(arch)", it returns "Traceback (most recent call last): ERROR: Unable to locate a modulefile for 'mpi/mpich-x86_64'". Then I use the command "module avail" and the...

Architecture: Power9 (Summit supercomputer). MPI: IBM Spectrum MPI 10.4.0.03rtm4 (repo revision: IBM_SPECTRUM_MPI_10.04.00.03_2021.01.12_RTM4; release date: unreleased developer copy). mpi4py version: 3.1.1. Reproduce...

@dalcinl I created this issue because I heard that there are some questions/suggestions about Fujitsu MPI.

I'm having a problem with data corruption when using `allgather`, but only on one of the HPC systems we use. I think the problem is very likely somewhere in the InfiniBand...
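For context, a minimal sketch of the two allgather paths in mpi4py; this is illustrative only and does not reproduce the reported corruption, and the buffer sizes are arbitrary:

```
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Lowercase allgather: pickles arbitrary Python objects.
objs = comm.allgather({"rank": rank})

# Uppercase Allgather: raw buffers, the path most likely to exercise the
# InfiniBand transport directly.
send = np.full(4, rank, dtype="d")
recv = np.empty(4 * size, dtype="d")
comm.Allgather([send, MPI.DOUBLE], [recv, MPI.DOUBLE])

if rank == 0:
    print(objs, recv, sep="\n")
```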

See https://github.com/mpi4py/mpi4py/issues/19#issuecomment-826221898.

`__getitem__` with a slice might work, but it's easier to just disable them altogether.

sparc64 is not the most common architecture around, but for what it's worth, 3.1.2 has started giving a Bus Error (Invalid address alignment) in testPackUnpackExternal (test_pack.TestPackExternal): ``` testProbeRecv (test_p2p_obj_matched.TestP2PMatchedWorldDup) ......

Our application, PyFR, makes very successful use of mpi4py and has support for CUDA-aware MPI implementations. Here, however, our biggest issue is knowing if the MPI distribution we are running...
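One hedged way to answer that question for Open MPI specifically is to shell out to `ompi_info` and look for the `mpi_built_with_cuda_support` MCA parameter; other MPI distributions would need their own checks, and the helper below is only a sketch:

```
import shutil
import subprocess

def openmpi_cuda_aware():
    """Return True/False when detectable, None when unknown."""
    ompi_info = shutil.which("ompi_info")
    if ompi_info is None:
        return None  # not Open MPI, or ompi_info is not on PATH
    out = subprocess.run([ompi_info, "--parsable", "--all"],
                         capture_output=True, text=True, check=False).stdout
    for line in out.splitlines():
        # Open MPI reports its build-time CUDA flag as e.g.
        # mca:mpi:base:param:mpi_built_with_cuda_support:value:true
        if "mpi_built_with_cuda_support:value" in line:
            return line.rsplit(":", 1)[-1].strip().lower() == "true"
    return None

print("CUDA-aware Open MPI:", openmpi_cuda_aware())
```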