ginkgo
Add Distributed Multigrid.
This PR updates the multigrid class to handle distributed matrices and hence allows preconditioning and solution with distributed multigrid.
Major changes
- Store row and column partition objects in the Matrix class to use within Multigrid.
- Template the memory allocation and multigrid core functions on VectorType and allow dynamic switching between the dense and distributed vector types.
- Store matrix_data object in the distributed matrix class to be able to generate coarse matrices.
Of course, as there is no distributed coarse-generation method yet, distributed multigrid cannot be used automatically; that will be added in a future PR.
Points of discussion
- We probably need to store the partition objects in the distributed matrix class, but I am open to alternatives.
- For convenience, we probably also need to store the matrix_data object (or device_matrix_data), but I am not very happy about this.
Issues
- The mixed precision version of distributed multigrid does not yet work and needs to be looked into.
Would that allow using Multigrid as a preconditioner without Schwarz?
Right now, you use the partition only to get the local size of the matrix, which you can also get from the local matrix. The stored matrix data is not used at all. I would suggest removing these until they are actually necessary.
@greole , Yes, but a coarse-generation algorithm that is distributed-capable is necessary. That means we need the equivalent of AMGX, which generates the triplet (R, A_c, P).
@MarcelKoch, yes, I don't intend to merge this yet. I just wanted to show what changes could be necessary. At present we do not need the partition or the matrix_data.
@pratikvn could you rebase it? I think some changes are related to Schwarz.
@greole yes, coarsening may affect the non_local_matrix, but the coarsening method will take care of that, not the multigrid itself. In distributed multigrid, the distributed matrix for each level is already prepared, so we only need to create the vectors according to each level's distributed matrix (this is where the local size is used to create the vectors).
@greole, yes, #1403 would need to be merged as well. @yhmtsai, please feel free to merge this when you are ready. If you need me to merge it, let me know.