Add a uniform coarsening algorithm for coarse grid generation.
This PR adds a new class to generate coarse matrices, which is useful for multigrid and other multi-level methods in general.
Two main ways to generate the coarse matrices are provided:
- A naive one, where the user provides a constant jump between the selected rows (see the sketch at the end of this description).
- ~~A user-specified coarse row array.~~ (Separate class and separate PR)
TODO:
- [x] Add omp, ref, cuda and HIP tests.
- [x] Merge #986
Because the device kernels for creating a submatrix from an IndexSet are still missing, that part will be moved to a separate PR.
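For illustration, here is a minimal, Ginkgo-independent sketch of the first variant: every `jump`-th fine row is selected as a coarse row, and the corresponding restriction matrix `R` is emitted in coordinate form. The names `Entry` and `selection_restriction` are made up for this sketch and are not part of the actual class interface.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Sketch only: select every `jump`-th fine row as a coarse row and emit the
// corresponding restriction matrix R in coordinate (row, column, value) form.
struct Entry {
    std::size_t row;
    std::size_t col;
    double value;
};

std::vector<Entry> selection_restriction(std::size_t fine_size,
                                         std::size_t jump)
{
    std::vector<Entry> restriction;
    std::size_t coarse_row = 0;
    for (std::size_t fine_row = 0; fine_row < fine_size; fine_row += jump) {
        // Each coarse row picks exactly one fine row with weight 1.
        restriction.push_back({coarse_row++, fine_row, 1.0});
    }
    return restriction;
}

int main()
{
    // 10 fine rows with a jump of 2 -> coarse rows map to fine rows 0, 2, 4, 6, 8.
    for (const auto& e : selection_restriction(10, 2)) {
        std::cout << "R(" << e.row << ", " << e.col << ") = " << e.value << '\n';
    }
}
```

With such a selection-based restriction, the coarse matrix follows as the triple product `R * A * R^T` (or `R * A * P` with a separate prolongation `P`).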
I mentioned this in the meeting before: when I hear the description of this functionality, I think "uniform". In my view, selection usually refers to finding a single element, or a small number of elements. The operation you are doing is a coarsening, so how about `UniformCoarsening`? This can probably also be used to implement a simple 1D Geometric Multigrid, right?
Naming it `Coarsening` is a good idea. How about `SimpleCoarsening`? Because you have the ability to specify an array for the coarsening indices, `Uniform` might not be the best name.
Yes, this should allow us to implement a simple 1D Geometric Multigrid.
These two might be different enough that we could put them into different types? `FixedCoarsening` and `UniformCoarsening`? Do we have any generic strategies for describing or picking the averaging operation? That could be just picking one of the fine nodes, an (un-)weighted average, ...
Yes, we could technically put them in two separate classes. At some point we will end up with a lot of separate classes, but that is probably okay as long as they implement different algorithms.
Currently, this just picks some nodes, but we could technically have different averaging strategies. These could range from a simple sum-normalization per row of the restriction and prolongation matrices to more involved strategies based on diagonal and off-diagonal fine-matrix element weights (see the sketch below).
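To make that distinction concrete, here is a small sketch (not the PR's code) contrasting plain injection with a sum-normalized neighborhood average; a matrix-weighted variant would replace the equal weights with weights derived from the fine-matrix entries.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Sketch only: restrict a fine vector either by injection (keep the selected
// node's value) or by a sum-normalized average over the node's neighborhood.
std::vector<double> restrict_injection(const std::vector<double>& fine,
                                       std::size_t jump)
{
    std::vector<double> coarse;
    for (std::size_t i = 0; i < fine.size(); i += jump) {
        coarse.push_back(fine[i]);
    }
    return coarse;
}

std::vector<double> restrict_averaged(const std::vector<double>& fine,
                                      std::size_t jump)
{
    std::vector<double> coarse;
    for (std::size_t i = 0; i < fine.size(); i += jump) {
        // Equal weights over the node and its direct neighbors, normalized so
        // that each restriction row sums to one. A matrix-weighted scheme
        // would use fine-matrix entries instead of equal weights.
        double sum = fine[i];
        std::size_t count = 1;
        if (i > 0) {
            sum += fine[i - 1];
            ++count;
        }
        if (i + 1 < fine.size()) {
            sum += fine[i + 1];
            ++count;
        }
        coarse.push_back(sum / static_cast<double>(count));
    }
    return coarse;
}

int main()
{
    const std::vector<double> fine{0, 1, 2, 3, 4, 5, 6, 7};
    for (auto v : restrict_injection(fine, 2)) std::cout << v << ' ';
    std::cout << '\n';  // prints: 0 2 4 6
    for (auto v : restrict_averaged(fine, 2)) std::cout << v << ' ';
    std::cout << '\n';  // prints: 0.5 2 4 6
}
```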
I was thinking that our AMGX implementation does two things: a heavy-edge matching to compute the fine-to-coarse mapping, and a uniform averaging scheme. Maybe it makes sense to look at what hypre is doing here? I think they have some averaging schemes specific to certain coarsening algorithms, but also generic ones. Anyway, this is only long-term thinking, not directly related to this PR, so I am fine with the approach :)
Yes, I think the generation of the restriction and prolongation matrices can be decoupled from the coarsening strategy itself in some cases. I don't have a clear solution for that now (interface-wise), but it might be something we should look at in the future.
The MultigridLevel is the general concept; I did not split it directly into a coarsening method and an interpolator method. A coarsening method may generate different kinds of information, so not every interpolator can rely on it. The MultigridLevel factory could have an option for the interpolator, but the choice should be made by the coarsening method. For example, an aggregation method gives aggregation groups in some sense (the definition of an aggregation group may differ between algorithms), whereas a coarsening method gives a C/F splitting, so the information is not directly interchangeable. Leaving the interpolator to be decided by the coarsening method gives it full flexibility. The interpolator might also not fit into a LinOpFactory directly; in Pgm, the interpolator only uses the aggregation information and the size (see the sketch below).
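A rough sketch of that design in plain C++ (none of these types are Ginkgo classes; they only illustrate the idea that the coarsening method both produces the coarsening information and chooses an interpolator that can consume it):

```cpp
#include <cstddef>
#include <iostream>
#include <memory>
#include <vector>

// Hypothetical interfaces: the coarsening method owns the information it
// produced (aggregation groups, a C/F splitting, ...) and therefore also
// decides which interpolator fits that information.
struct Interpolator {
    virtual ~Interpolator() = default;
    // Map a coarse-grid vector back to the fine grid.
    virtual std::vector<double> prolong(const std::vector<double>& coarse) const = 0;
};

struct CoarseningMethod {
    virtual ~CoarseningMethod() = default;
    virtual std::vector<double> restrict_vec(const std::vector<double>& fine) const = 0;
    // The coarsening method, not the level factory, picks the interpolator.
    virtual std::unique_ptr<Interpolator> make_interpolator() const = 0;
};

// Uniform selection: keep every `jump`-th node; its natural interpolator
// simply scatters coarse values back to the selected fine nodes.
class UniformSelection : public CoarseningMethod {
public:
    UniformSelection(std::size_t fine_size, std::size_t jump)
        : fine_size_{fine_size}, jump_{jump}
    {}

    std::vector<double> restrict_vec(const std::vector<double>& fine) const override
    {
        std::vector<double> coarse;
        for (std::size_t i = 0; i < fine.size(); i += jump_) {
            coarse.push_back(fine[i]);
        }
        return coarse;
    }

    std::unique_ptr<Interpolator> make_interpolator() const override
    {
        struct Scatter : Interpolator {
            std::size_t fine_size;
            std::size_t jump;
            std::vector<double> prolong(const std::vector<double>& coarse) const override
            {
                std::vector<double> fine(fine_size, 0.0);
                for (std::size_t c = 0; c < coarse.size(); ++c) {
                    fine[c * jump] = coarse[c];
                }
                return fine;
            }
        };
        auto interp = std::make_unique<Scatter>();
        interp->fine_size = fine_size_;
        interp->jump = jump_;
        return interp;
    }

private:
    std::size_t fine_size_;
    std::size_t jump_;
};

int main()
{
    UniformSelection coarsening{8, 2};
    const std::vector<double> fine{0, 1, 2, 3, 4, 5, 6, 7};
    const auto coarse = coarsening.restrict_vec(fine);
    const auto back = coarsening.make_interpolator()->prolong(coarse);
    for (auto v : back) std::cout << v << ' ';
    std::cout << '\n';  // prints: 0 0 2 0 4 0 6 0
}
```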
Codecov Report
Merging #979 (0e30d47) into develop (0622251) will increase coverage by 0.51%. The diff coverage is 97.47%.
:exclamation: Current head 0e30d47 differs from pull request most recent head c2ac3f8. Consider uploading reports for the commit c2ac3f8 to get more accurate results.
```diff
@@            Coverage Diff             @@
##           develop     #979      +/-  ##
===========================================
+ Coverage    91.77%   92.28%   +0.51%
===========================================
  Files          499      486      -13
  Lines        42972    40358    -2614
===========================================
- Hits         39439    37246    -2193
+ Misses        3533     3112     -421
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| core/device_hooks/common_kernels.inc.cpp | 0.00% <0.00%> (ø) | |
| include/ginkgo/core/matrix/csr.hpp | 45.53% <ø> (+2.17%) | :arrow_up: |
| omp/base/index_set_kernels.cpp | 94.11% <ø> (ø) | |
| reference/base/index_set_kernels.cpp | 94.11% <83.33%> (-0.09%) | :arrow_down: |
| core/base/index_set.cpp | 96.29% <92.30%> (-1.44%) | :arrow_down: |
| ...ence/test/multigrid/uniform_coarsening_kernels.cpp | 96.03% <96.03%> (ø) | |
| ...n/unified/multigrid/uniform_coarsening_kernels.cpp | 100.00% <100.00%> (ø) | |
| core/matrix/csr.cpp | 94.86% <100.00%> (+1.45%) | :arrow_up: |
| core/multigrid/amgx_pgm.cpp | 100.00% <100.00%> (ø) | |
| core/multigrid/uniform_coarsening.cpp | 100.00% <100.00%> (ø) | |
| ... and 182 more | | |
Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update fb224a5...c2ac3f8. Read the comment docs.
Error: The following files need to be formatted:
test/multigrid/uniform_coarsening_kernels.cpp
You can find a formatting patch under Artifacts here, or run `format!` if you have write access to Ginkgo.
Note: This PR changes the Ginkgo ABI:
Functions changes summary: 0 Removed, 0 Changed, 968 Added functions
Variables changes summary: 0 Removed, 0 Changed, 0 Added variable
For details check the full ABI diff under Artifacts here
Closing stale PR