Stopping criterion for iterative solvers
Currently, the iterative solvers try to reach the prescribed accuracy. If the
residual stagnates for a (very) large number of iterations, the solver stops
and produces an error.
The idea is to adopt a more positive viewpoint:
1) If stagnation is detected, stop the iterations but continue with the
calculation of scattering quantities (while producing a warning). This
especially makes sense when stagnation happens at a relative residual of about
10^-4. A bit more thought is required for cases such as orientation averaging.
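The stop-with-warning behavior described in 1) could be sketched as follows. This is a minimal illustration, not ADDA's actual code; the `step` callback, window size, and return convention are all hypothetical:

```python
import math

def solve_with_stagnation_warning(step, eps=1e-5,
                                  stagnation_window=50, max_iter=10000):
    """Drive an iterative solver via the user-supplied `step` callback,
    which performs one iteration and returns the current relative residual.

    Instead of aborting on stagnation, stop, return the best residual
    reached so far, and attach a warning so the caller can still compute
    scattering quantities (hypothetical sketch).
    """
    best_res = math.inf
    since_improvement = 0
    for _ in range(max_iter):
        res = step()
        if res <= eps:
            return res, None  # converged normally
        # Count iterations without meaningful improvement of the residual
        if res < best_res * (1 - 1e-12):
            best_res = res
            since_improvement = 0
        else:
            since_improvement += 1
        if since_improvement >= stagnation_window:
            warning = ("stagnated at relative residual %.2e; "
                       "continuing with scattering quantities" % best_res)
            return best_res, warning
    return best_res, "maximum number of iterations reached"
```

For example, a residual history that decays and then flattens at 10^-4 (above eps = 10^-5) would terminate with a warning rather than an error, and the partially converged field would still be used downstream.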
2) The following idea was inspired by the book: A. Doicu, T. Trautmann, and F.
Schreier, "Numerical regularization for atmospheric inverse problems,"
Springer, Heidelberg (2010).
Stopping an iterative solver early can be considered a regularization
procedure. So for a given DDA problem there exists an optimal eps,
corresponding to the best accuracy of the final solution. For standard cases
this optimum is much better (close to machine precision) than the default
stopping criterion and is thus irrelevant. However, for cases with very slow
convergence the opposite may well hold.
So the idea is to modify the stopping criterion through a more detailed
analysis of the preceding convergence. The convergence rate can be used to
estimate the condition number, which in turn can be used to estimate when it is
time to stop iterating. Alternatively, iterations can be stopped when the
convergence rate slows down significantly.
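Both variants above can be sketched from the residual history alone. The condition-number estimate below inverts the classical CG convergence bound r ≈ (√κ − 1)/(√κ + 1), which assumes CG-like convergence; the slowdown test compares the recent rate (decades of residual reduction per iteration) against the overall rate. All function names, window sizes, and thresholds here are hypothetical illustrations, not proposed ADDA defaults:

```python
import math

def estimate_condition_number(residuals):
    """Estimate kappa from the observed geometric-mean convergence rate r,
    via the CG bound r ~ (sqrt(kappa)-1)/(sqrt(kappa)+1).
    Rough sketch: assumes steady, CG-like convergence."""
    k = len(residuals) - 1
    r = (residuals[-1] / residuals[0]) ** (1.0 / k)
    r = min(r, 1 - 1e-15)  # guard against a non-decreasing history
    sqrt_kappa = (1 + r) / (1 - r)
    return sqrt_kappa ** 2

def should_stop(residuals, window=20, slowdown=0.5):
    """Stop when the convergence rate over the last `window` iterations
    (in decades per iteration) falls below `slowdown` times the overall
    rate, i.e. when convergence has significantly slowed."""
    if len(residuals) <= window:
        return False
    overall = math.log10(residuals[0] / residuals[-1]) / (len(residuals) - 1)
    recent = math.log10(residuals[-window - 1] / residuals[-1]) / window
    return recent < slowdown * overall
```

A steadily converging history keeps `should_stop` False, while a history that plateaus triggers it; the tuning of `window` and `slowdown` is exactly the kind of question that needs the preliminary mathematical analysis mentioned below.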
In any case, this requires a lot of preliminary mathematical analysis.
Especially problematic is devising criteria for methods like QMR (without
guaranteed convergence): it is hard to discriminate between a natural slowdown
(due to a large condition number) and quasi-random (near-)breakdowns caused by
an almost-zero denominator.
Original issue reported on code.google.com by yurkin
on 5 Mar 2013 at 10:06
Original comment by yurkin
on 3 Aug 2014 at 4:58
- Added labels: Component-Logic
Related to this is the discussion of the optimal threshold for a specific problem. We currently use 1e-5 to be safe, but many tests show that 1e-3 or even 1e-2 can be fine in some cases.
/cc @alkichigin
Based on the discussion with @alkichigin, we may want to set the default threshold in ADDA to 1e-4 or 1e-3 to save some computational time in all simulations, but that requires many benchmark runs over a wide range of parameters. And setting the default eps in ADDA based on a comprehensive analysis of the scattering problem seems to be overkill (and not very robust). The optimal threshold may also depend on whether only integral scattering quantities are calculated or angle-resolved ones are also required.
This is also related to #92 and #133 (some fast options for cross sections may amplify numerical errors).