Jan Philipp Thiele
I investigated further and there were indeed some missing entries in the sparsity pattern. Now single-core MPI runs work, but after adding a second core `distribute()` fails after assembly...
After further investigation I found the following somewhat odd behaviour: - when using dealii::SparsityPattern in the SparseMatrix reinit with locally owned index sets and mpi_comm, the diagonal-only entries of...
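For reference, a minimal sketch of the `reinit` call pattern I am describing; the variable names (`locally_owned_dofs`, `mpi_communicator`, ...) are placeholders, not taken from the actual code:

```cpp
#include <deal.II/base/index_set.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/sparsity_pattern.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>

// Placeholder setup: in practice these come from a distributed
// DoFHandler and the usual sparsity-pattern construction.
void compare_reinit_variants(const dealii::IndexSet               &locally_owned_dofs,
                             const dealii::SparsityPattern        &serial_pattern,
                             const dealii::DynamicSparsityPattern &dynamic_pattern,
                             const MPI_Comm                        mpi_communicator)
{
  dealii::TrilinosWrappers::SparseMatrix matrix;

  // The variant from the comment: a dealii::SparsityPattern together
  // with the locally owned index sets and the communicator.
  matrix.reinit(locally_owned_dofs, locally_owned_dofs,
                serial_pattern, mpi_communicator);

  // For comparison, the same overload with a DynamicSparsityPattern,
  // which is the more common choice in parallel codes.
  matrix.reinit(locally_owned_dofs, locally_owned_dofs,
                dynamic_pattern, mpi_communicator);
}
```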
I also came across this recently. I'm not sure whether we can fix this in deal.II, as either MUMPS or Amesos keeps a pointer to the original matrix, so the...
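To make the lifetime problem concrete, here is a sketch of how I understand the hazard with `TrilinosWrappers::SolverDirect`; the function and variable names are illustrative only:

```cpp
#include <deal.II/lac/trilinos_solver.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II/lac/trilinos_vector.h>

using namespace dealii;

void illustrate_lifetime_issue(TrilinosWrappers::SparseMatrix  &matrix,
                               TrilinosWrappers::MPI::Vector   &solution,
                               const TrilinosWrappers::MPI::Vector &rhs)
{
  SolverControl               control(100, 1e-12);
  TrilinosWrappers::SolverDirect direct(control);

  // initialize() factorizes `matrix`; internally the Amesos solver
  // holds a pointer to the underlying Epetra matrix.
  direct.initialize(matrix);

  // If `matrix` were reinit()-ed or destroyed between initialize() and
  // solve(), that internal pointer would dangle -- which, as far as I
  // understand, is the lifetime problem referred to above.
  direct.solve(solution, rhs);
}
```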
> I think that's the key question: all of the `TrilinosWrappers` classes build on Epetra, but we don't have a complete replacement for these classes that would build on Tpetra....
As I just encountered this as well, I had a quick look at Epetra and our wrappers. If I am not completely mistaken, we just ignored that Epetra's indices are signed...
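A small sketch of the signed-index point in plain Epetra; whether the `long long` overloads are available depends on how Trilinos was configured:

```cpp
#include <Epetra_Map.h>
#include <Epetra_MpiComm.h>
#include <mpi.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  {
    Epetra_MpiComm comm(MPI_COMM_WORLD);

    // Classic Epetra global indices are a signed 32-bit `int`, so the
    // largest representable global size is 2^31 - 1, not 2^32 - 1.
    const int n_32 = 1000;
    Epetra_Map map_32(n_32, 0, comm);

    // With 64-bit global indices enabled at configure time, the
    // `long long` overloads are used instead -- still signed, so the
    // limit is 2^63 - 1.
    const long long n_64 = 1000LL;
    Epetra_Map map_64(n_64, 0, comm);

    (void)map_32;
    (void)map_64;
  }
  MPI_Finalize();
}
```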
Very nice! A few questions for the direct solver tests: - will we be able to add a full setup to the test server, so we can test whether all...
> Yes, we have a number of tests that have outputs like `tests/trilinos/precondition_muelu_smoother.with_trilinos_with_muelu=on.with_64bit_indices=off.output` that express requirements. Whether it's worth testing that we get errors on unsupported configurations is perhaps debatable,...
With the `swap` function implemented in #16602, at least the SolverCG test should pass, since I was able to compute the reference solution for direct_solver_2.cc with it. If you want...
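Roughly how such a reference solution can be computed with `SolverCG`; a sketch assuming an SPD matrix, not the actual test code:

```cpp
#include <deal.II/lac/precondition.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II/lac/trilinos_vector.h>

using namespace dealii;

void compute_reference_solution(const TrilinosWrappers::SparseMatrix &A,
                                TrilinosWrappers::MPI::Vector        &x,
                                const TrilinosWrappers::MPI::Vector  &b)
{
  // Tight relative tolerance so the CG solution can serve as a
  // reference to compare the direct solvers against.
  SolverControl control(1000, 1e-14 * b.l2_norm());
  SolverCG<TrilinosWrappers::MPI::Vector> cg(control);
  cg.solve(A, x, b, PreconditionIdentity());
}
```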
We now have #16625
I ran into a similar issue when trying to get Trilinos 13.2 with ShyLU_DD to work with our Epetra wrappers. MueLu supports Epetra with the same function `Convert_Epetra_CrsMatrix_ToXpetra_CrsMatrixWrap` that I...
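For context, roughly how I understand that conversion path; the header, namespace, and template parameters below are from memory and may well be off for 13.2:

```cpp
#include <Epetra_CrsMatrix.h>
#include <MueLu_Utilities.hpp>
#include <Teuchos_RCP.hpp>
#include <Xpetra_CrsMatrixWrap.hpp>

using Teuchos::RCP;

// Wrap an Epetra_CrsMatrix as an Xpetra matrix so that Epetra-backed
// setups can reuse the Xpetra-based preconditioner machinery.
RCP<Xpetra::CrsMatrixWrap<double, int, int, Xpetra::EpetraNode>>
to_xpetra(RCP<Epetra_CrsMatrix> epetra_matrix)
{
  return MueLu::Convert_Epetra_CrsMatrix_ToXpetra_CrsMatrixWrap<
    double, int, int, Xpetra::EpetraNode>(epetra_matrix);
}
```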