hoomd-blue
Fix segfaults for MPCD with large particle counts under MPI
Description
This PR refactors parts of the MPCD code that relied on generic serialized MPI routines to instead use custom MPI datatypes. This increases the amount of data that can be sent because each MPI element is now a whole particle rather than a single byte, so the signed-int count limit covers far more data. This is an attempt to fix segfaults reported when either initializing from or taking a snapshot with a large number of MPCD particles.
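As a rough sketch of the custom-datatype approach (the struct layout and helper names below are illustrative only, not HOOMD-blue's actual MPCD types), a committed MPI struct type lets the count argument refer to whole particles instead of bytes:

```cpp
// Illustrative only: build a committed MPI datatype for a particle struct so
// that MPI counts refer to whole particles instead of bytes.
#include <mpi.h>
#include <cstddef>
#include <vector>

struct ParticleData
    {
    double pos[3]; // position
    double vel[3]; // velocity
    int type;      // type id
    };

MPI_Datatype makeParticleType()
    {
    const int nblocks = 3;
    int blocklengths[nblocks] = {3, 3, 1};
    MPI_Aint displacements[nblocks] = {offsetof(ParticleData, pos),
                                       offsetof(ParticleData, vel),
                                       offsetof(ParticleData, type)};
    MPI_Datatype types[nblocks] = {MPI_DOUBLE, MPI_DOUBLE, MPI_INT};

    MPI_Datatype tmp, particle_type;
    MPI_Type_create_struct(nblocks, blocklengths, displacements, types, &tmp);
    // resize so the extent matches sizeof(ParticleData), including any padding
    MPI_Type_create_resized(tmp, 0, sizeof(ParticleData), &particle_type);
    MPI_Type_commit(&particle_type);
    MPI_Type_free(&tmp);
    return particle_type;
    }

void broadcastParticles(std::vector<ParticleData>& particles, int root, MPI_Comm comm)
    {
    MPI_Datatype particle_type = makeParticleType();
    // the count is the number of particles, not bytes, so the signed-int
    // limit is reached much later than with a byte-serialized buffer
    MPI_Bcast(particles.data(), static_cast<int>(particles.size()), particle_type, root, comm);
    MPI_Type_free(&particle_type);
    }
```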
I replaced most uses of bcast with MPI_Bcast while I was at it since this is a relatively simple MPI call.
I also added exceptions that are thrown when the serialized methods would exceed the byte count that can be stored in a signed int.
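A minimal sketch of such a guard, with illustrative names rather than the actual methods added in this PR:

```cpp
// Hypothetical guard: refuse to hand a byte-serialized buffer to MPI when its
// size cannot be represented as a signed int count.
#include <climits>
#include <cstddef>
#include <stdexcept>
#include <string>
#include <vector>

void checkSerializedSize(const std::vector<char>& buffer)
    {
    if (buffer.size() > static_cast<std::size_t>(INT_MAX))
        {
        throw std::overflow_error("Serialized MPI buffer of "
                                  + std::to_string(buffer.size())
                                  + " bytes exceeds the maximum signed int count");
        }
    }
```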
Motivation and context
Large numbers of particles appear to overflow the signed-int byte counts used by the serialized MPI methods.
Resolves #1895
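For a rough sense of scale (assuming, purely for illustration, on the order of 50 bytes of serialized data per MPCD particle rather than the exact HOOMD-blue layout): 100 million particles × ~50 bytes ≈ 5 GB, far more than the 2^31 − 1 bytes ≈ 2.1 GB that a signed int byte count can describe, so any byte-counted MPI call overflows.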
How has this been tested?
Existing tests pass. I will confirm with the reporting user (or test on a cluster myself) that this fixes the segfaults they were observing.
Change log
* Fix segmentation faults initializing MPCD simulations with large numbers (~100+ million) of MPCD particles under MPI.
Checklist:
- [X] I have reviewed the Contributor Guidelines.
- [X] I agree with the terms of the HOOMD-blue Contributor Agreement.
- [X] My name is on the list of contributors (sphinx-doc/credits.rst) in the pull request source branch.
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
Not stale—haven't had time to finish this off.
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
@joaander I will rebase this onto trunk-major and finish this week.
@joaander this should be ready to take a look at! I updated the regular particle communicator to use these new data structures, but I didn't touch the use of bcast, scatter_v, etc. outside MPCD. I did add code to throw an exception if the serialization buffer sizes will overflow an int.
FYI: While running one of the other branches that does not have this fix, I found that some MPI libraries internally check for and report the error we were unintentionally triggering (count too large). This supports the conclusion that this is the right fix.
It is strange that the MPI API hasn't addressed this defect more aggressively. It has been 25 years since 64-bit processors became commonplace.
It is indeed strange and frustrating. It should be addressed as part of the MPI-4 standard, but that will probably take a while to percolate through the implementations.
https://www.mpi-forum.org/docs/mpi-4.1/mpi41-report/node499.htm#Node499
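For reference, the MPI-4 large-count bindings replace the int count with MPI_Count; a minimal sketch, assuming an MPI library that already ships the `_c` variants:

```cpp
// Minimal sketch of the MPI-4 large-count interface referenced above,
// assuming an MPI library that implements the "_c" bindings (MPI >= 4.0).
// The MPI_Count parameter removes the signed-int limit on element counts.
#include <mpi.h>
#include <vector>

void bcastLargeBuffer(std::vector<char>& buffer, int root, MPI_Comm comm)
    {
    MPI_Count count = static_cast<MPI_Count>(buffer.size());
    MPI_Bcast_c(buffer.data(), count, MPI_CHAR, root, comm);
    }
```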