Peter R.
You can skip the "-autoclean" step and instead manually subtract one node (56686 -> 56685) in mesh.header and remove the last row of mesh.nodes; this gives you exactly the same output. So the...
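For reference, a minimal Python sketch of that manual edit, assuming the usual serial Elmer mesh layout where the first entry on the first line of mesh.header is the node count and mesh.nodes has one line per node (the "mesh" directory name here is just a placeholder):

```python
from pathlib import Path

mesh_dir = Path("mesh")  # placeholder for the actual mesh directory

# Decrement the node count (assumed to be the first number on the first line of mesh.header).
header_path = mesh_dir / "mesh.header"
header_lines = header_path.read_text().splitlines()
counts = header_lines[0].split()
counts[0] = str(int(counts[0]) - 1)   # e.g. 56686 -> 56685
header_lines[0] = " ".join(counts)
header_path.write_text("\n".join(header_lines) + "\n")

# Drop the last row of mesh.nodes (one node per line).
nodes_path = mesh_dir / "mesh.nodes"
node_lines = nodes_path.read_text().splitlines()
nodes_path.write_text("\n".join(node_lines[:-1]) + "\n")
```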
For now I applied the patch for the full tests and created an elmerice test job that is run automatically on the "elmerice" branch (with at most 4 MPI tasks). The thing is that...
I added just the elmerice tests; currently 10 of them fail on gcc. https://github.com/ElmerCSC/elmerfem/actions/runs/10401394765/job/28803865353
The reason is that many elmerice tests are currently failing; see https://github.com/ElmerCSC/elmerfem/actions/runs/10401394765/job/28803865353#step:8:1047 This being the case, all the tests would fail and we lose the success of tests as...
At least one nice thing about the redirect is that we can go to the tests directory and study the stdout & stderr. Each test has them in its...
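As a rough illustration of what that enables, here is a small Python sketch that walks the test directories and prints any non-empty error output; the directory path and the file name test-stderr.log are only placeholders for whatever the redirect actually writes:

```python
from pathlib import Path

tests_root = Path("fem/tests")  # placeholder for the directory holding the per-test folders

# Print the redirected stderr of every test whose error file is non-empty.
for err_file in sorted(tests_root.glob("*/test-stderr.log")):  # placeholder file name
    text = err_file.read_text().strip()
    if text:
        print(f"--- {err_file.parent.name} ---")
        print(text)
```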
This is not a practical limitation since all Elmer/Ice users probably need MPI.
Sorry, a bug sneaked in. Most of these should be fixed now.
When I added the label "quick" I sorted the tests by runtime on some of my platforms. Tests taking more than about 10 s got the label "slow". So there is no rocket science behind it. Computers get...
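The sorting itself was done by hand, but the idea is simple enough to sketch. The snippet below assumes a plain text file of "test-name runtime-in-seconds" pairs (a hypothetical export of the measured timings) and the roughly 10 s cutoff mentioned above:

```python
from pathlib import Path

THRESHOLD_S = 10.0  # approximate cutoff between "quick" and "slow"

# Each line of the (hypothetical) timings file: "<test name> <runtime in seconds>"
for line in Path("test_timings.txt").read_text().splitlines():
    name, seconds = line.rsplit(None, 1)  # rsplit tolerates spaces in test names
    label = "quick" if float(seconds) < THRESHOLD_S else "slow"
    print(f"{name}: {label}")
```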
MPI is historically the way to go with Elmer (and many other FEM codes). Once the problem is distributed, the parallelization is trivial (meaning all the physics, assembly, etc.). Only...
I am closing some old build-related issues since most likely they are not relevant any more.