Input tolerance vs output error
Hello, I am running ginkgo/1.4.0 using Conan-center, on Mac. I would like to have full control over the tolerance of the solver.
I have set up a logger and the following stopping criteria:
auto iter_stop = gko::share(
    gko::stop::Iteration::build().with_max_iters(1000).on(exec));
auto tol_stop = gko::share(gko::stop::ResidualNorm<double>::build()
                               .with_reduction_factor(1e-20)
                               .on(exec));
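For context, the logger referenced further down is created and attached roughly as in the Ginkgo examples; this is only a sketch, and the exact create() signature may differ between Ginkgo versions:
std::shared_ptr<const gko::log::Convergence<double>> logger =
    gko::log::Convergence<double>::create(exec);
// Attach the logger to both criterion factories so it records the number of
// iterations and the (implicit) residual norm at convergence.
iter_stop->add_logger(logger);
tol_stop->add_logger(logger);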
Then I am building the solver this way:
using bj = gko::preconditioner::Jacobi<ValueType, IndexType>;
auto solver_gen =
    cg::build()
        .with_criteria(iter_stop, tol_stop)
        // Add preconditioner, these 2 lines are the only
        // difference from the simple solver example
        .with_preconditioner(bj::build()
                                 .with_max_block_size(16u)
                                 .with_storage_optimization(
                                     gko::precision_reduction::autodetect())
                                 .on(exec))
        .on(exec);
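For reference, the factory is then used roughly as in the examples to generate and run the solver (a sketch; A, b and x are the system matrix, right-hand side and solution vector that appear later in this post):
// Generate a solver for the matrix A and apply it to b, writing into x.
auto solver = solver_gen->generate(A);
solver->apply(lend(b), lend(x));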
At the end, I would like to make sure that the output error is lower than the input tolerance. If I am not mistaken, there are two ways to do it:
1. Reading the implicit residual norm from the logger:
auto impl_res = gko::as<real_vec>(logger->get_implicit_sq_resnorm());
std::cout << "Implicit residual norm (r):\n";
std::cout << std::sqrt(impl_res->at(1, 0)) << std::endl;
2. Computing the explicit residual norm with something similar to:
auto one = gko::initialize<vec>({1.0}, exec);
auto neg_one = gko::initialize<vec>({-1.0}, exec);
auto res = gko::initialize<real_vec>({0.0}, exec);
A->apply(lend(one), lend(x), lend(neg_one), lend(b));
b->compute_norm2(lend(res));
The second one produces a runtime error with 1.4.0 on Mac. With the first one, the value is much, much lower than 1e-20.
What would you recommend? Thank you. Best regards.
Note: I can attach the full code if needed.
If you set the residual tolerance to 1e-20, then the solver will try to push the error down to at most 1e-20, but the final value might be lower than that.
For CG, methods 1. and 2. should be equivalent, because the internal residual norm of CG should track the backward error calculated through $||b - Ax||_2$.
Regarding your runtime error with 2.: I think that if exec is a GPU executor and you then try to print the result directly, you run into memory access issues, because the data lives in device memory and has to be copied to the host first.
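A minimal sketch of two safe ways to read the computed norm (assuming res is the 1x1 Dense vector from snippet 2. above; these are illustrations, not the only options):
// Option A: print with write(), as the Ginkgo examples do.
write(std::cout, lend(res));
// Option B: explicitly clone the result to the host executor, then access
// individual entries with at().
auto host_res = gko::clone(exec->get_master(), res);
std::cout << host_res->at(0, 0) << std::endl;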
Also, what you are reporting is not the residual norm, but the square of the energy norm $||x||_A$ (since it has the suffix _sq).
I have some data where I set a tolerance of 1e-20 and $||b - Ax||_2$ is something like 1e-12. Should we divide by $||b||_2$?
Another issue I have: my right-hand side contains only zeros, and it takes the maximum number of iterations to arrive at the all-zero solution.
Thanks for the details. On your first question: you are using a relative residual norm stopping criterion, so by default we are looking at $||b - Ax||_2 / ||b||_2$. If you need an absolute residual norm, you can use .with_baseline(gko::stop::mode::absolute). Note that it may not be possible to reach the desired accuracy in any case, since the tolerance is independent of the magnitude of x and b and thus of their absolute accuracy.
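For illustration, a sketch of the tolerance criterion with an absolute baseline (the 1e-12 value is only an example, not a recommendation):
auto abs_tol_stop = gko::share(
    gko::stop::ResidualNorm<double>::build()
        // With mode::absolute, the factor is treated as an absolute tolerance
        // on the residual norm instead of being divided by ||b||_2.
        .with_reduction_factor(1e-12)
        .with_baseline(gko::stop::mode::absolute)
        .on(exec));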
On the second question: Yes, as with the relative residual norm, if your initial residual is 0, you divide by zero. We will add a fix for that one though!
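In the meantime, a possible user-side workaround sketch (assuming A is nonsingular, so x = 0 is the exact solution whenever b = 0; vec, real_vec and solver as in the snippets above):
// Check ||b||_2 on the host before the solve to avoid the division by zero
// inside the relative residual criterion.
auto b_norm = gko::initialize<real_vec>({0.0}, exec);
b->compute_norm2(lend(b_norm));
auto host_b_norm = gko::clone(exec->get_master(), b_norm);
if (host_b_norm->at(0, 0) == 0.0) {
    // b is exactly zero: skip the solve and keep/set x = 0.
} else {
    solver->apply(lend(b), lend(x));
}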