
Default ruge_stuben solver diverges for a Poisson problem, due to problematic interpolation


To reproduce

Download the FEMLAB/poisson3Db matrix files (poisson3Db.mtx and poisson3Db_b.mtx) and run:

using MatrixMarket
import AlgebraicMultigrid as AMG

A = MatrixMarket.mmread("poisson3Db.mtx")
b = MatrixMarket.mmread("poisson3Db_b.mtx") |> vec

ml = AMG.ruge_stuben(A)
x_amg = AMG._solve(ml, b, verbose=true, maxiter=20)

The residual diverges:

Norm of residual at iteration      1 is 1.8735e+06
Norm of residual at iteration      2 is 2.1074e+06
Norm of residual at iteration      3 is 3.6537e+06
Norm of residual at iteration      4 is 5.8408e+06
Norm of residual at iteration      5 is 9.5234e+06
Norm of residual at iteration      6 is 1.6343e+07
Norm of residual at iteration      7 is 2.9776e+07
Norm of residual at iteration      8 is 5.7516e+07
Norm of residual at iteration      9 is 1.1839e+08
Norm of residual at iteration     10 is 2.7924e+08
Norm of residual at iteration     11 is 9.3488e+08
Norm of residual at iteration     12 is 4.4738e+09
Norm of residual at iteration     13 is 2.4183e+10
Norm of residual at iteration     14 is 1.3371e+11
Norm of residual at iteration     15 is 7.4188e+11
Norm of residual at iteration     16 is 4.1185e+12
Norm of residual at iteration     17 is 2.2864e+13
Norm of residual at iteration     18 is 1.2693e+14
Norm of residual at iteration     19 is 7.0469e+14
Norm of residual at iteration     20 is 3.9122e+15

For a good coarse-grid transfer, the column sums of the restriction operator R (equivalently, the row sums of the prolongation P) should be approximately 1. However, the resulting operator has column sums as large as 6.5.

R_sum = vec(sum(ml.levels[1].R, dims=1))

maximum(R_sum)  # 6.5, way too big
sum(R_sum .> 1.5)  # 9174

using Plots
plot(R_sum)  # show all sums

Compare with PyAMG default

PyAMG with default settings is able to converge:

import pyamg
from scipy.io import mmread

A = mmread("poisson3Db.mtx").tocsr()
b = mmread("poisson3Db_b.mtx").flatten()

ml = pyamg.ruge_stuben_solver(A)

residuals = []
x_amg = ml.solve(b, residuals=residuals, maxiter=20)
print(residuals)
[1873512.4673384393,
 233255.5159887513,
 109241.08890436463,
 70342.75287986077,
 51906.30512772909,
 41392.049922233826,
 34744.68012516487,
 30219.93538009633,
 26944.726717410056,
 24444.551602043706,
 22448.232978979628,
 20794.899486606348,
 19385.86164700615,
 18158.491431757277,
 17071.605472539315,
 16097.097721476017,
 15215.053123024978,
 14410.837321621537,
 13673.326931758267,
 12993.808948353657,
 12365.279421869809]

The interpolation operator also looks good:

import numpy as np
import matplotlib.pyplot as plt

R_sum = np.squeeze(np.asarray(ml.levels[0].R.sum(axis=0)))
R_sum.max()  #  1.26
np.sum(R_sum > 1.5)  # 0

plt.plot(R_sum, marker='.', linewidth=0, alpha=0.2)  # all nicely bounded

By default, both PyAMG and AMG.jl use theta = 0.25 for the strength of connection and symmetric Gauss-Seidel as the smoother, so their behavior should be similar. Maybe there is some subtle problem in the interpolation code; I haven't fully figured it out yet.
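To rule out configuration differences, the Julia-side settings can be spelled out explicitly. Treat this as a sketch rather than a confirmed call signature: the strength keyword and the Classical / SymmetricSweep / GaussSeidel constructors are my reading of the package source.

# Sketch: make the (assumed) AMG.jl defaults explicit -- classical strength
# with theta = 0.25 and one symmetric Gauss-Seidel sweep, mirroring PyAMG.
ml = AMG.ruge_stuben(
    A,
    strength = AMG.Classical(0.25),                          # theta = 0.25 (assumed keyword/constructor)
    presmoother = AMG.GaussSeidel(AMG.SymmetricSweep(), 1),  # one symmetric GS sweep
    postsmoother = AMG.GaussSeidel(AMG.SymmetricSweep(), 1)
)
x_amg = AMG._solve(ml, b, verbose=true, maxiter=20)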

learning-chip opened this issue on Mar 3, 2022

Actually, smoothed_aggregation also diverges, while pyamg.smoothed_aggregation_solver with exactly the same settings works fine.

ml_sa = AMG.smoothed_aggregation(A)
AMG._solve(ml_sa, b, verbose=true, maxiter=20)

gives:

Norm of residual at iteration      1 is 1.8735e+06
Norm of residual at iteration      2 is 1.5396e+07
Norm of residual at iteration      3 is 9.8877e+08
Norm of residual at iteration      4 is 6.4761e+10
Norm of residual at iteration      5 is 4.2368e+12
Norm of residual at iteration      6 is 2.7722e+14
Norm of residual at iteration      7 is 1.8138e+16
Norm of residual at iteration      8 is 1.1868e+18
Norm of residual at iteration      9 is 7.7650e+19
Norm of residual at iteration     10 is 5.0806e+21
Norm of residual at iteration     11 is 3.3242e+23
Norm of residual at iteration     12 is 2.1750e+25
Norm of residual at iteration     13 is 1.4231e+27
Norm of residual at iteration     14 is 9.3112e+28
Norm of residual at iteration     15 is 6.0923e+30
Norm of residual at iteration     16 is 3.9861e+32
Norm of residual at iteration     17 is 2.6081e+34
Norm of residual at iteration     18 is 1.7065e+36
Norm of residual at iteration     19 is 1.1165e+38
Norm of residual at iteration     20 is 7.3054e+39

Both AMG.jl and PyAMG generate the same coarse-grid hierarchy (below), but their interpolation weights are quite different (see the sketch after the table).

Multilevel Solver
-----------------
Operator Complexity: 1.022
Grid Complexity: 1.009
No. of Levels: 3
Coarse Solver: Pinv
Level     Unknowns     NonZeros
-----     --------     --------
    1        85623      2374949 [97.85%]
    2          777        52035 [ 2.14%]
    3            5           25 [ 0.00%]
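To see where the two hierarchies diverge, the first-level interpolation weights can be compared directly. A minimal sketch on the Julia side, assuming each level also stores the prolongation in a P field (the R field is already accessed above):

using SparseArrays: nonzeros

P = ml.levels[1].P             # first-level prolongation (assumed field name)
extrema(nonzeros(P))           # range of the interpolation weights
extrema(vec(sum(P, dims=2)))   # row sums of P -- should stay close to 1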

learning-chip commented on Mar 4, 2022

It is actually the smoother's fault. Swapping the built-in smoother out for IterativeSolvers.gauss_seidel! solves the problem.

using MatrixMarket
import AlgebraicMultigrid as AMG
import IterativeSolvers: gauss_seidel!

A = MatrixMarket.mmread("poisson3Db.mtx")
b = MatrixMarket.mmread("poisson3Db_b.mtx") |> vec

ml = AMG.ruge_stuben(
    A,
    presmoother = (A, x, b) -> gauss_seidel!(x, A, b),
    postsmoother = (A, x, b) -> gauss_seidel!(x, A, b)
)

AMG._solve(ml, b, verbose=true, maxiter=20)  # now converges

gauss_seidel! should be functionally equivalent to AMG.GaussSeidel(AMG.ForwardSweep(), 1), but in practice the results are quite different, maybe due to numerical instability.
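The claimed equivalence can be checked directly by applying a single sweep of each smoother to the same starting vector. A sketch, assuming the built-in smoother objects are callable as smoother(A, x, b) (the same calling convention the presmoother keyword uses above):

x1 = zeros(size(A, 1))
AMG.GaussSeidel(AMG.ForwardSweep(), 1)(A, x1, b)   # built-in forward sweep

x2 = zeros(size(A, 1))
gauss_seidel!(x2, A, b, maxiter=1)                 # IterativeSolvers forward sweep

x1 ≈ x2   # expected to be false here, matching the observation above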

learning-chip commented on Mar 9, 2022

Since I am currently working a bit on AMG myself, I took a deeper look into this. The problem seems to be that your matrix is non-symmetric (see https://www.cise.ufl.edu/research/sparse/matrices/FEMLAB/poisson3Db.html), but the internal Gauss-Seidel assumes the provided CSC matrix is symmetric (i and j are swapped; see https://github.com/JuliaLinearAlgebra/AlgebraicMultigrid.jl/blob/master/src/smoother.jl#L33-L36 and the CSC docs at https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.nzrange).
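To make the index swap concrete: in CSC storage, nzrange(A, i) walks column i, so a sweep written that way updates x[i] using the entries A[j, i], i.e. it performs Gauss-Seidel with transpose(A). A small self-contained sketch (not the package's exact code) that reproduces the effect:

using SparseArrays
import IterativeSolvers: gauss_seidel!

# Forward Gauss-Seidel sweep that walks the CSC *columns* of a matrix while
# treating them as rows -- harmless for symmetric matrices, wrong otherwise.
function gs_csc_columns!(x, A::SparseMatrixCSC, b)
    rows, vals = rowvals(A), nonzeros(A)
    for i in 1:size(A, 1)              # meant as "row i", but nzrange walks column i
        rsum = zero(eltype(A)); d = zero(eltype(A))
        for k in nzrange(A, i)
            r, v = rows[k], vals[k]
            r == i ? (d = v) : (rsum += v * x[r])
        end
        d != 0 && (x[i] = (b[i] - rsum) / d)
    end
    return x
end

# Tiny non-symmetric example: the column-walking sweep matches a sweep on the
# transpose, not on the matrix itself.
As = sparse([4.0 1.0 0.0; 0.0 4.0 1.0; 1.0 0.0 4.0])
bs = ones(3)
x_col = gs_csc_columns!(zeros(3), As, bs)
x_ref = gauss_seidel!(zeros(3), As, bs, maxiter=1)                     # sweep on As
x_T   = gauss_seidel!(zeros(3), sparse(transpose(As)), bs, maxiter=1)  # sweep on As'
x_col ≈ x_T    # true: the column-based loop is Gauss-Seidel on transpose(As)
x_col ≈ x_ref  # false: it is not Gauss-Seidel on As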

termi-official commented on Aug 8, 2023