mpi
Design and implement a high-level library on top of this one.
Potential primitives:
- [x] Distributed FSIO:
  - Distributed reading/writing of partial/complete data to/from multiple processes without user intervention.
  - Perhaps just accept HDF5 as a de-facto standard and write a wrapper for it (it is already based on MPI-IO).
  - Inspiration: HDF5, NetCDF, ADIOS.
  - Resolution: Accepted HDF5. See https://github.com/acdemiralp/hdf for a modern interface. A minimal usage sketch follows below.
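
For reference, a minimal sketch of the accepted route against the raw HDF5 C API with the MPI-IO driver (assumes a parallel HDF5 build; the file name, dataset name and sizes are illustrative):

```cpp
// Parallel HDF5 write: each rank writes its own slice of a 1D dataset.
// Requires an HDF5 build with parallel support. Names/sizes are illustrative.
#include <hdf5.h>
#include <mpi.h>
#include <vector>

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);
  int rank = 0, size = 1;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  const hsize_t local_count = 100; // elements per rank
  const hsize_t total_count = local_count * static_cast<hsize_t>(size);
  const hsize_t offset      = local_count * static_cast<hsize_t>(rank);
  std::vector<double> local_data(local_count, static_cast<double>(rank));

  // Open the file collectively through the MPI-IO driver.
  hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
  H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
  hid_t file = H5Fcreate("output.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

  // Create the full dataset, then select this rank's hyperslab.
  hid_t filespace = H5Screate_simple(1, &total_count, nullptr);
  hid_t dataset   = H5Dcreate(file, "values", H5T_NATIVE_DOUBLE, filespace,
                              H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
  hid_t memspace  = H5Screate_simple(1, &local_count, nullptr);
  H5Sselect_hyperslab(filespace, H5S_SELECT_SET, &offset, nullptr,
                      &local_count, nullptr);

  // Collective write: every rank participates in a single call, which is
  // the "without user intervention" part.
  hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
  H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
  H5Dwrite(dataset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl,
           local_data.data());

  H5Pclose(dxpl); H5Sclose(memspace); H5Dclose(dataset);
  H5Sclose(filespace); H5Fclose(file); H5Pclose(fapl);
  MPI_Finalize();
}
```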
- [x] Sync Variables:
  - Variables which are synchronized across all processes (in a communicator?) without explicit calls to collectives.
  - Inspiration: Unity3D/Unreal/RakNet network SyncVars.
  - Resolution: Implemented. See `mpi/extensions/shared_variable.hpp`. A sketch of the underlying mechanism follows below.
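
For illustration, a minimal sketch of the mechanism such a variable can be built on; the class name and interface below are hypothetical and do not mirror the actual contents of `mpi/extensions/shared_variable.hpp`:

```cpp
// Hypothetical sketch of a synchronized variable built on MPI_Bcast.
#include <mpi.h>

template <typename type> // type must be trivially copyable
class sync_variable
{
public:
  explicit sync_variable(MPI_Comm communicator = MPI_COMM_WORLD)
  : communicator_(communicator) {}

  // Collective: every rank calls set, and the root's value is broadcast
  // so all ranks observe the same state afterwards.
  void set(const type& value, int root)
  {
    value_ = value;
    MPI_Bcast(&value_, static_cast<int>(sizeof(type)), MPI_BYTE, root, communicator_);
  }
  const type& get() const { return value_; }

private:
  MPI_Comm communicator_;
  type     value_ {};
};
```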
- [ ] Sync Memory:
  - Abstraction of a memory region in which:
    - An object constructed or destroyed in one process is constructed or destroyed across all processes.
    - Members of all objects are kept in sync across all processes.
    - A function called in one process is called across all processes.
    - Objects can reference / point to each other (pointers are valid for use across processes - virtual pointers).
  - Builds on sync variables and RPCs. A rough sketch of the virtual-pointer part follows below.
  - Inspiration: Unity3D/Unreal maps with networking.
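
A very rough sketch of the virtual-pointer part under SPMD assumptions (all names are hypothetical); member synchronization and remote calls would layer sync variables and RPCs on top of this:

```cpp
// Hypothetical sketch of sync memory's "virtual pointers": a collective
// factory assigns deterministic ids, so an id resolves to the corresponding
// local replica on every process. construct/destruct must be called
// collectively with identical arguments (SPMD style).
#include <cstdint>
#include <memory>
#include <unordered_map>
#include <utility>

class sync_memory
{
public:
  using id = std::uint64_t;

  // Collective: every process constructs its own replica under the same id.
  template <typename type, typename... argument_types>
  id construct(argument_types&&... arguments)
  {
    const auto current = next_id_++; // deterministic, hence identical on all processes
    objects_[current]  = std::make_shared<type>(std::forward<argument_types>(arguments)...);
    return current;                  // a process-independent "virtual pointer"
  }
  // Collective: every process destroys its replica.
  void destruct(id identifier) { objects_.erase(identifier); }

  template <typename type>
  std::shared_ptr<type> resolve(id identifier)
  {
    return std::static_pointer_cast<type>(objects_.at(identifier));
  }

private:
  id next_id_ = 0;
  std::unordered_map<id, std::shared_ptr<void>> objects_;
};
```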
- [ ] Distributed Shared Memory:
  - Treat the combined memory of all processes as one, i.e. a partitioned global address space (PGAS).
  - PGAS versions of STL containers.
  - PGAS versions of `<algorithm>`.
  - Probably using `mpi::window`.
  - See: DASH: Data Structures and Algorithms with Support for Hierarchical Locality.
  - See: UPC++: A PGAS Extension for C++.
  - Objection: I doubt that I can create something as elaborate as DASH. I should just accept DASH as a de-facto standard for PGAS in C++. It even supports C++17.
  - Objection to objection: DASH is bloated to the throat with scripts, makefiles and site-specifics, i.e. a workflow that disrupts the user's own workflow. The user has to take a break from programming their own work and spend time building, installing and making sure that DASH works. I can just provide a few STL-style headers to achieve PGAS, specifically `<pgas/algorithm>`, `<pgas/array>`, `<pgas/iterator>` and `<pgas/numeric>`. A sketch of what such a header could contain follows below.
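
A minimal sketch of what a `<pgas/array>`-style container could look like, written against raw MPI one-sided calls (the proposed headers do not exist yet; the class name and fixed block distribution are illustrative, and `mpi::window` would wrap the raw calls used here):

```cpp
// Block-distributed array over an MPI one-sided window: one block per rank,
// with global indices resolved to (owner rank, local offset).
#include <mpi.h>
#include <cstddef>

class pgas_array_double
{
public:
  pgas_array_double(std::size_t local_size, MPI_Comm communicator = MPI_COMM_WORLD)
  : local_size_(local_size), communicator_(communicator)
  {
    // Collective: every rank contributes local_size elements to the window.
    MPI_Win_allocate(local_size * sizeof(double), sizeof(double),
                     MPI_INFO_NULL, communicator, &local_data_, &window_);
  }
  ~pgas_array_double() { MPI_Win_free(&window_); } // collective

  double get(std::size_t global_index) const
  {
    double value;
    const int owner  = static_cast<int>(global_index / local_size_);
    const auto local = static_cast<MPI_Aint>(global_index % local_size_);
    MPI_Win_lock(MPI_LOCK_SHARED, owner, 0, window_);
    MPI_Get(&value, 1, MPI_DOUBLE, owner, local, 1, MPI_DOUBLE, window_);
    MPI_Win_unlock(owner, window_); // completes the transfer
    return value;
  }
  void set(std::size_t global_index, double value)
  {
    const int owner  = static_cast<int>(global_index / local_size_);
    const auto local = static_cast<MPI_Aint>(global_index % local_size_);
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, owner, 0, window_);
    MPI_Put(&value, 1, MPI_DOUBLE, owner, local, 1, MPI_DOUBLE, window_);
    MPI_Win_unlock(owner, window_);
  }

private:
  std::size_t local_size_;
  MPI_Comm    communicator_;
  double*     local_data_ = nullptr;
  MPI_Win     window_     = MPI_WIN_NULL;
};
```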
- [ ] Task Graphs:
  - Define streamlined inputs and outputs (i.e. resources) for each point-to-point and collective operation (i.e. tasks).
  - Allow the user to construct a directed acyclic graph of resources and tasks.
  - Automatically place barriers for resources referred to by multiple tasks. A toy sketch follows below.
  - This would achieve complex communication operations with understandable code.
  - Inspiration: Intel TBB.
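
A toy sketch of the automatic barrier-placement idea (all names are hypothetical; a real version would track per-resource dependencies rather than use a global barrier, and would prefer non-blocking collectives):

```cpp
// Tasks declare the resources they read and write; the runner inserts an
// MPI_Barrier whenever a task touches a resource written by an earlier,
// not-yet-synchronized task. (Write-after-read dependencies are ignored
// for brevity.)
#include <mpi.h>
#include <functional>
#include <set>
#include <string>
#include <vector>

struct task
{
  std::function<void()> function;
  std::set<std::string> reads, writes;
};

inline void run_graph(const std::vector<task>& tasks, MPI_Comm communicator)
{
  std::set<std::string> dirty; // resources written since the last barrier
  for (const auto& current : tasks)
  {
    bool conflict = false;
    for (const auto& resource : current.reads ) conflict |= dirty.count(resource) > 0;
    for (const auto& resource : current.writes) conflict |= dirty.count(resource) > 0;
    if (conflict)
    {
      MPI_Barrier(communicator); // automatic synchronization point
      dirty.clear();
    }
    current.function();
    dirty.insert(current.writes.begin(), current.writes.end());
  }
}
```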
- [ ] Remote Procedure Calls:
  - Functions which are simultaneously called on all processes (in a communicator?). A sketch of the idea follows below.
  - Inspiration: Unity3D/Unreal/RakNet network commands/RPCs.
  - See: Mercury: Enabling remote procedure call for high-performance computing.
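
A minimal sketch of the collective-call semantics, assuming functions are registered in the same order on every process (all names are hypothetical; argument marshalling is omitted for brevity):

```cpp
// Hypothetical collective RPC: functions are registered under identical ids
// on every process; the caller broadcasts an id from the root and all
// processes (root included) invoke the same function.
#include <mpi.h>
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <utility>

class rpc_registry
{
public:
  explicit rpc_registry(MPI_Comm communicator = MPI_COMM_WORLD)
  : communicator_(communicator) {}

  // Must be called in the same order on every process so the ids agree.
  std::int32_t register_function(std::function<void()> function)
  {
    const auto id  = next_id_++;
    functions_[id] = std::move(function);
    return id;
  }

  // Collective: the root picks the function; non-root ranks may pass any id,
  // since the broadcast overwrites their local copy before the invocation.
  void call(std::int32_t id, int root)
  {
    MPI_Bcast(&id, 1, MPI_INT32_T, root, communicator_);
    functions_.at(id)();
  }

private:
  MPI_Comm communicator_;
  std::int32_t next_id_ = 0;
  std::unordered_map<std::int32_t, std::function<void()>> functions_;
};
```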
- [ ] Socket-like Communication:
  - Real-time (always-on, passive), bidirectional, low-latency communication across all processes (in a communicator?).
  - Subscribe to a group/topic with a key and a callback which gets called whenever a message with that key is received (sketched below).
  - Perhaps a notion of "lobby" and "room" for ranks.
  - Request/reply, publish/subscribe, pipeline and exclusive-pair patterns.
  - Inspiration: Socket.io, ZeroMQ.
  - See: MPI/RT - an emerging standard for high-performance real-time systems.
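
A rough sketch of the publish/subscribe flavor, mapping topics to MPI tags (all names are hypothetical; a real version would run the poll on a background thread to get the always-on, passive behavior):

```cpp
// Subscribers register callbacks per topic tag; a poll step drains pending
// messages with MPI_Iprobe and dispatches them.
#include <mpi.h>
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <utility>
#include <vector>

class pubsub
{
public:
  explicit pubsub(MPI_Comm communicator = MPI_COMM_WORLD)
  : communicator_(communicator) {}

  using callback = std::function<void(int /*source*/, const std::vector<std::uint8_t>&)>;

  void subscribe(int topic_tag, callback on_message)
  {
    callbacks_[topic_tag] = std::move(on_message);
  }
  void publish(int target_rank, int topic_tag, const std::vector<std::uint8_t>& message)
  {
    MPI_Send(message.data(), static_cast<int>(message.size()), MPI_BYTE,
             target_rank, topic_tag, communicator_);
  }
  // Drain all pending messages on subscribed topics and fire the callbacks.
  void poll()
  {
    for (auto& [tag, on_message] : callbacks_)
    {
      int pending = 1;
      while (pending)
      {
        MPI_Status status;
        MPI_Iprobe(MPI_ANY_SOURCE, tag, communicator_, &pending, &status);
        if (!pending) break;
        int count = 0;
        MPI_Get_count(&status, MPI_BYTE, &count);
        std::vector<std::uint8_t> message(static_cast<std::size_t>(count));
        MPI_Recv(message.data(), count, MPI_BYTE, status.MPI_SOURCE, tag,
                 communicator_, MPI_STATUS_IGNORE);
        on_message(status.MPI_SOURCE, message);
      }
    }
  }

private:
  MPI_Comm communicator_;
  std::unordered_map<int, callback> callbacks_;
};
```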