
HDF5 Option for OPENPMD_HDF5_INDEPENDENT

Open ax3l opened this issue 2 years ago • 5 comments

Although our API contract allows independent store/load calls (as in ADIOS), anything MPI-I/O based will generally perform better when the collective MPI-I/O calls are used.

Thus, the env control OPENPMD_HDF5_INDEPENDENT should be translated into an option for HDF5, so that users who guarantee collective storeChunk calls can activate it programmatically for performance.

Note that in MPI-I/O (and thus PHDF5) this means ranks with zero contributions need to issue zero-sized storeChunk calls. The rationale is that although an MPI rank might not contribute data itself, it might end up acting as an aggregator rank for MPI's collective data transport to disk.

ax3l avatar Aug 18 '23 17:08 ax3l

Refs.:

  • Quincy Koziol's ATPESC23 or HUG23 slides (to be linked)
  • @jeanbez's HUG23 slides https://docs.google.com/presentation/d/1AdQNynZ7Qe40fdwNQ_MvZFBPVWZd4x6GuCUx6IkfWxU/edit#slide=id.g23ba479a05a_0_7

ax3l avatar Aug 18 '23 17:08 ax3l

As a side note, there are non-blocking MPI-I/O ops now coming to MPI (courtesy of Quincy Koziol's work on the standard). HDF5 is planning to use them to introduce async APIs as well.

In the future, that would allow a workflow similar to the one we have in ADIOS1/2: independent store calls for chunks and maybe even attributes, with a single collective call to kick off the writes/reads.

ax3l avatar Aug 18 '23 17:08 ax3l

Other details:

  • check H5Pget_mpio_actual_io_mode and, if its result is not as expected, H5Pget_mpio_no_collective_cause

Things like accidental datatype conversions, dataspace conversions (e.g., between declaration and write) and too-small I/O requests (smaller than the file system block size) might break collective operations into independent I/O.

ax3l avatar Aug 18 '23 17:08 ax3l

> As a side note, there are non-blocking MPI-I/O ops now coming to MPI (courtesy of Quincy Koziol's work on the standard). HDF5 is planning to use them to introduce async APIs as well.
>
> In the future, that would allow a workflow similar to the one we have in ADIOS1/2: independent store calls for chunks and maybe even attributes, with a single collective call to kick off the writes/reads.

Side note, we are now testing OpenPMD with CACHE+ASYNC VOL connectors. Using the ASYNC VOL directly had some issues related to attributes that prevented the operations from being fully asynchronous and from benefiting from it, but stacking it with the CACHE VOL seems to do the trick.

jeanbez avatar Aug 23 '23 17:08 jeanbez

Note the API of Series::flush():

    /** Execute all required remaining IO operations to write or read data.
     *
     * @param backendConfig Further backend-specific instructions on how to
     *                      implement this flush call.
     *                      Must be provided in-line, configuration is not read
     *                      from files.
     */
    void flush(std::string backendConfig = "{}");

I imagine that series.flush("hdf5.independent_stores = false") would be a relatively straightforward API to expose this feature.
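Spelled out as the JSON form that flush() accepts, the proposed toggle might look like this (the key name follows the suggestion above and is not an implemented option at the time of writing):

```json
{
  "hdf5": {
    "independent_stores": false
  }
}
```

which would be passed as a string to series.flush(...), or equivalently in the inline shorthand suggested above.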

franzpoeschel avatar Sep 05 '23 10:09 franzpoeschel