Celso L. Mendes


I retested the code listed above today, with the current Unify version, keeping the MPI_Info settings as originally set in the code. Running it on 8 processors,...
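
The snippet doesn't show which MPI_Info hints the test actually sets, so the following is only a rough sketch of the pattern being described: create an MPI_Info object, attach hints, and pass it to MPI_File_open. The hint keys, values, and file path are illustrative, not the ones from the original test.

```c
/* Rough sketch only: hint keys/values and the file path are illustrative,
 * not the ones from the original test code. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    MPI_Info info;

    MPI_Init(&argc, &argv);

    /* Create an info object and attach I/O hints to it. */
    MPI_Info_create(&info);
    MPI_Info_set(info, "romio_cb_write", "enable");  /* example ROMIO hint */
    MPI_Info_set(info, "cb_nodes", "2");             /* example ROMIO hint */

    /* Pass the hints to the file open. */
    MPI_File_open(MPI_COMM_WORLD, "/unifyfs/testfile",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, info, &fh);

    /* ... the test's write/read sequence would go here ... */

    MPI_File_close(&fh);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}
```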

And for the sake of completeness, I also reran the original ROMIO test (atomicity.c) under Unify, using 8 processors on 2 Catalyst nodes (i.e., 4 processors/node), and it ran fine....

I ran the test code listed above today on Lassen (IBM), using eight processors on a single node and a smaller dataset size (BUFSIZE=100), with the MPI_Info settings commented out,...

These are some of my notes on the slides from the Unify Tutorial on 2/5/2020: Slide-17: I believe the correct build commands are "cd UnifyFS" and "./bootstrap.sh"...

A few additional comments on this problem, based on more tests done today. I got the same error using the dev version of Unify from about two months...

@kathrynmohror, as noted by @adammoody and @MichaelBrim above, I added a call to MPI_File_sync() in the example, between the calls to MPI_File_write_at_all and MPI_Barrier. With that, the code under...
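
A sketch of the change being described, with placeholder offsets and buffer size (the original example isn't shown in full here): MPI_File_sync goes between the collective write and the barrier, and a second sync after the barrier completes the sync/barrier/sync pattern mentioned below.

```c
/* Sketch of the described fix: MPI_File_sync inserted between
 * MPI_File_write_at_all and MPI_Barrier. BUFSIZE and the offsets are
 * placeholders, not the original example's values. */
#include <mpi.h>

#define BUFSIZE 100  /* placeholder size */

void write_then_publish(MPI_File fh, int rank, const char *buf)
{
    MPI_Offset offset = (MPI_Offset)rank * BUFSIZE;
    MPI_Status status;

    MPI_File_write_at_all(fh, offset, buf, BUFSIZE, MPI_BYTE, &status);
    MPI_File_sync(fh);            /* the added call: flush this rank's writes */
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_File_sync(fh);            /* completes the sync/barrier/sync pattern
                                     before any rank reads the data back */
}
```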

Quincey, from HDF5, recommends that we try their atomicity call: https://support.hdfgroup.org/HDF5/doc1.8/RM/RM_H5F.html#File-SetMpiAtomicity This could make the code slower, but it would be equivalent to a sync/barrier/sync and would make operations safer,...
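
For reference, a minimal sketch of using that call in a parallel HDF5 program; only H5Fset_mpi_atomicity itself comes from the linked page, and the file name and property-list setup are illustrative. This requires an HDF5 build with MPI (parallel) support.

```c
/* Minimal sketch; requires parallel HDF5. File name and property-list
 * setup are illustrative. */
#include <hdf5.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Open the file through the MPI-IO driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("/unifyfs/test.h5", H5F_ACC_TRUNC,
                           H5P_DEFAULT, fapl);

    /* The recommended call: enable MPI atomicity for this file. */
    H5Fset_mpi_atomicity(file, 1);

    /* ... dataset writes/reads would go here ... */

    H5Fclose(file);
    H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}
```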

I tested the example above today, inserting this call after MPI_File_open: MPI_File_set_atomicity(fh, 1); The behavior did not change: it still reads only 5 bytes from the...
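
For context, this is roughly where the call sits relative to the open (a sketch; the path, open flags, and surrounding I/O are placeholders, not the original example's):

```c
/* Sketch: MPI_File_set_atomicity(fh, 1) inserted right after the open.
 * Path and open flags are placeholders. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;

    MPI_Init(&argc, &argv);

    MPI_File_open(MPI_COMM_WORLD, "/unifyfs/testfile",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);

    /* The inserted call: request atomic mode for this file handle. */
    MPI_File_set_atomicity(fh, 1);

    /* ... the example's write/read sequence would follow ... */

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```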

@adammoody, I tested this today after getting a fresh pull of dev as you suggested, which includes PR-569. I ran the original MPI-IO code above (i.e., without any MPI_File_sync...

@adammoody, @kathrynmohror: I'm definitely having no luck with the "new-margotree-abtsync-filehash" branch. I've been running tests with it for the last three days, beyond the test above that I...