bosilca
```diff
diff --git a/parsec/mca/pins/pins.c b/parsec/mca/pins/pins.c
index 6e1e7a6c6..f75c6fdc6 100644
--- a/parsec/mca/pins/pins.c
+++ b/parsec/mca/pins/pins.c
@@ -12,6 +12,8 @@
 #include "parsec/constants.h"
 #include "parsec/utils/debug.h"
 #include "parsec/execution_stream.h"
+#include "parsec/parsec_internal.h"
+#include "parsec/parsec_binary_profile.h"
 /**
  * Mask for...
```
If the deterministic order of your application is dictated by the MPI message-ordering constraints (in-order matching), then the only way to fail the determinism test is to not...
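As an illustration of that in-order matching guarantee, here is a minimal sketch assuming two ranks (the variable names and tag are mine): two messages sent from the same rank on the same communicator and tag cannot overtake each other.

```c
/* Minimal sketch of MPI's non-overtaking rule: two sends from the same
 * sender on the same communicator and tag match the receives in program
 * order, so 'first' is always delivered before 'second'. */
#include <mpi.h>
#include <assert.h>

int main(int argc, char **argv)
{
    int rank, a = 1, b = 2, x, y;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Send(&a, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* first  */
        MPI_Send(&b, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* second */
    } else if (rank == 1) {
        MPI_Recv(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&y, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        assert(x == 1 && y == 2);  /* guaranteed by in-order matching */
    }

    MPI_Finalize();
    return 0;
}
```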
If the counting is incorrect, either we are dropping messages or there are multiple threads and the counting is not atomic. In both cases it is difficult to assess without a...
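For the second case, a minimal sketch of what a non-atomic count looks like and how to fix it, assuming a C11 compiler and multiple receive threads (the counter and function names are hypothetical):

```c
/* If several threads bump a plain 'long' message counter, increments can
 * be lost (the read-modify-write races); a C11 atomic counter cannot. */
#include <stdatomic.h>

static atomic_long msgs_received = 0;   /* shared across receive threads */

void on_message_received(void)
{
    /* 'counter++' on a plain long is a data race under
     * MPI_THREAD_MULTIPLE; an atomic fetch-add is not. */
    atomic_fetch_add_explicit(&msgs_received, 1, memory_order_relaxed);
}
```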
I had to fix the code you pasted to make it readable. Please check that I didn't alter the logic of the code. This issue is confusing. You start by...
Which buffer are we talking about here? If you can post a reproducer, it will simplify the entire discussion. Based on the code you provided I created a simple...
MPI does not define receive fairness between multiple peers. If your assumption above is right (i.e., some ranks are overwhelmed by incoming messages), then adding some flow control should help....
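For illustration, a minimal sketch of sender-side flow control under that assumption; the window size, tags, and function names are hypothetical, not something OMPI provides:

```c
/* Sender pauses every WINDOW messages until the receiver acknowledges
 * that it drained them, bounding the number of unexpected messages. */
#include <mpi.h>

#define WINDOW   64   /* messages in flight before requiring an ack */
#define DATA_TAG  0
#define ACK_TAG   1

void throttled_send(const char *buf, int len, int dest, int nmsgs)
{
    for (int i = 0; i < nmsgs; i++) {
        MPI_Send(buf, len, MPI_CHAR, dest, DATA_TAG, MPI_COMM_WORLD);
        if ((i + 1) % WINDOW == 0) {
            int ack;
            /* Block until the receiver confirms it drained the window. */
            MPI_Recv(&ack, 1, MPI_INT, dest, ACK_TAG, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
    }
}

void draining_recv(char *buf, int len, int src, int nmsgs)
{
    for (int i = 0; i < nmsgs; i++) {
        MPI_Recv(buf, len, MPI_CHAR, src, DATA_TAG, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        if ((i + 1) % WINDOW == 0) {   /* release the sender */
            int ack = 1;
            MPI_Send(&ack, 1, MPI_INT, src, ACK_TAG, MPI_COMM_WORLD);
        }
    }
}
```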
Then buffering is not the problem. I would suggest you start digging into your app's communication pattern.
A synchronous send will block all future communications between the two peers until the receiver receives the synchronous message. So, in an application where the communication pattern per round follows a...
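A minimal sketch of that blocking behavior, assuming two ranks (the 5-second delay is mine, only there to make the stall visible):

```c
/* MPI_Ssend on rank 0 cannot complete before rank 1 posts the matching
 * receive, so everything rank 0 does afterwards is serialized behind
 * the receiver reaching its MPI_Recv. */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank, payload = 42;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Blocks until rank 1 reaches its MPI_Recv. */
        MPI_Ssend(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("rank 0: receiver has matched the message\n");
    } else if (rank == 1) {
        sleep(5);  /* delay the match: rank 0 is stuck for ~5 seconds */
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```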
@pflee2002 your messages are extremely small, so it seems unlikely you are running out of memory. Let's make a quick computation. Let's assume that OMPI allocated 1KB for each message (this is...
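To make the back-of-the-envelope computation concrete, a sketch with loudly assumed numbers (both the 1KB per-message buffer and the message count are hypothetical, not measured):

```c
/* Assuming 1 KB of eager buffering per unexpected message, it takes on
 * the order of a million buffered messages to consume a single GB. */
#include <stdio.h>

int main(void)
{
    const long bytes_per_msg = 1024;      /* assumed eager buffer size */
    const long unexpected    = 1000000;   /* assumed messages in flight */
    printf("%.2f GB\n",
           (double)(bytes_per_msg * unexpected) / (1 << 30));
    /* prints ~0.95 GB */
    return 0;
}
```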
It is not safe to use probe+ireceive in a multithreaded application, especially if multiple threads drain the network. This is well documented, and the standard explains in detail the potential...
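A common thread-safe alternative is the matched probe introduced in MPI 3.0; a minimal sketch, with a hypothetical helper name:

```c
/* MPI_Mprobe atomically matches and dequeues one message, so another
 * thread cannot steal it between the probe and the receive (the classic
 * probe+recv race under MPI_THREAD_MULTIPLE). */
#include <mpi.h>
#include <stdlib.h>

void drain_one(MPI_Comm comm)
{
    MPI_Message msg;
    MPI_Status  status;
    int         count;

    /* Match and remove one pending message from the matching queue. */
    MPI_Mprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &msg, &status);
    MPI_Get_count(&status, MPI_BYTE, &count);

    char *buf = malloc(count);
    /* Only this message handle can receive the matched message. */
    MPI_Mrecv(buf, count, MPI_BYTE, &msg, MPI_STATUS_IGNORE);
    /* ... process buf ... */
    free(buf);
}
```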