ExodusII::write_added_sides()
This replaces #3188 - it starts with that branch, but adds data output as well.
I still have more testing to do before merging, but I'm again at a stage where double-checking that CI is happy would be good.
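For context, here is roughly how I'd expect this to be driven from user code. The setter name `write_added_sides(bool)` is only inferred from the PR title, and the mesh, system, and filename are made up for illustration; the rest is standard libMesh API:

```cpp
// Illustration only: ExodusII_IO::write_added_sides(bool) is assumed from the
// PR title; everything else here is existing libMesh API.
#include "libmesh/libmesh.h"
#include "libmesh/mesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/equation_systems.h"
#include "libmesh/system.h"
#include "libmesh/enum_order.h"
#include "libmesh/exodusII_io.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  // Build a small hex mesh and a trivial system just to have data to write.
  Mesh mesh (init.comm());
  MeshTools::Generation::build_cube (mesh, 4, 4, 4);

  EquationSystems es (mesh);
  es.add_system<System> ("primary").add_variable ("u", FIRST);
  es.init ();

  ExodusII_IO exo (mesh);
  exo.write_added_sides (true);                    // assumed new setter from this PR
  exo.write_discontinuous_exodusII ("out.e", es);  // existing API
  return 0;
}
```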
Job Coverage on 1856534 wanted to post the following:
Coverage
| | 9d5514 Total | #3233 185653 Total | +/- | New |
|---|---|---|---|---|
| Rate | 55.98% | 56.11% | +0.13% | 83.16% |
| Hits | 44841 | 45061 | +220 | 311 |
| Misses | 35262 | 35249 | -13 | 63 |
Warnings
- New line coverage rate 83.16% is less than the suggested 90.0%
This comment will be updated on new commits.
Looks like an actual failure in misc_ex5?
```
***************************************************************
* Running Example miscellaneous_ex5:
* mpiexec -np 2 ./example-dbg --n_threads=2
***************************************************************
Beginning Solve 0
Number of elements: 219
assembling elliptic dg system... done
System has: 768 degrees of freedom.
Linear solver converged at step: 31, final residual: 4.91472e-12
L2-Error is: 0.00666744
Beginning Solve 1
Number of elements: 827
assembling elliptic dg system... done
System has: 2896 degrees of freedom.
Linear solver converged at step: 46, final residual: 7.22623e-12
L2-Error is: 0.00264921
Beginning Solve 2
Number of elements: 3003
assembling elliptic dg system... done
System has: 10512 degrees of freedom.
Linear solver converged at step: 65, final residual: 1.95985e-11
L2-Error is: 0.0016323
Assertion `num_real_nodes == num_nodes' failed.
num_real_nodes = 21024
num_nodes = 0
[1] ../src/mesh/exodusII_io.C, line 2096, compiled Apr 19 2022 at 15:11:19
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 1
FAIL: run.sh
===============================================================
```
This example calls ExodusII_IO::write_discontinuous_exodusII()
though, so maybe that makes sense?
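To spell out why that path is special: the discontinuous writer gives every element its own copies of its nodes, so its node bookkeeping differs from the mesh's. A quick standalone toy of the kind of count involved (my own illustration, not code from the library or this PR):

```cpp
// Toy illustration: in a discontinuous Exodus file each element gets private
// copies of its nodes, so the file's node count is the per-element sum of
// n_nodes(), not mesh.n_nodes().
#include "libmesh/libmesh.h"
#include "libmesh/mesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/elem.h"
#include "libmesh/id_types.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  Mesh mesh (init.comm());
  MeshTools::Generation::build_square (mesh, 8, 8);

  dof_id_type duplicated_nodes = 0;
  for (const Elem * elem : mesh.active_local_element_ptr_range())
    duplicated_nodes += elem->n_nodes();
  mesh.comm().sum (duplicated_nodes);  // total across processors

  libMesh::out << "shared nodes:     " << mesh.n_nodes() << '\n'
               << "duplicated nodes: " << duplicated_nodes << std::endl;
  return 0;
}
```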
That's definitely an actual failure in misc_ex5, but I'm still trying to replicate it.
Never mind; it's easy to replicate in parallel.
And easy to fix; it wasn't an actual failure, just an assertion getting called in parallel that should have only been called on proc 0.
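For the record, the shape of the fix is just restricting the check to the processor that actually holds the file's metadata. A self-contained sketch of that pattern (the names and values are lifted from the log above; this is not the actual exodusII_io.C diff):

```cpp
// Sketch of the "assert only on rank 0" pattern; values mimic the misc_ex5
// log, where only rank 0 read the file and so only rank 0 has a nonzero
// node count to compare against.
#include "libmesh/libmesh.h"
#include "libmesh/libmesh_common.h"
#include "libmesh/id_types.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  const dof_id_type num_real_nodes = 21024;
  const dof_id_type num_nodes =
    (init.comm().rank() == 0) ? num_real_nodes : 0;  // other ranks never opened the file

  // The fix: run the consistency check only where the data exists.
  if (init.comm().rank() == 0)
    libmesh_assert_equal_to (num_real_nodes, num_nodes);

  libmesh_ignore (num_real_nodes, num_nodes);  // avoid unused warnings in opt builds
  return 0;
}
```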
That MOOSE thing is Python acting up again, but the complex configuration failure is an actual bug in the new unit tests.
Well, that was scary. After I restored the regular test instead of the debugging test, it started failing on Hexes. That turned out to be a bug in the test rather than the implementation, though. After I finish hammering on it with more processors, distributed meshes, and so on, I'll push, then merge once CI is happy too.