fix: periodic and mpi
Description
This PR fixes a bug introduced earlier that affected MPI usage and periodic boundary conditions: the ghost cells were not set properly.
The new implementation of the update_sub_mesh_impl method in the mr/mesh.hpp file correctly computes all the ghost cells used during the adaptation step and the integration of the numerical scheme.
The key observation is that exactly the same algorithm adds the ghost cells in all configurations: one domain without periodic conditions, one domain with periodic conditions, and several subdomains with or without periodic conditions. This makes sense because, for both the neighbouring subdomains and the periodic images, we simply translate a copy of the domain containing the true cells and add the resulting ghost cells to the subdomain; the procedure is identical in both cases (a minimal sketch is given below).
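For illustration only, here is a minimal sketch of that common pattern; it is not the actual code of update_sub_mesh_impl. The helper name add_ghosts_from_copy, the CellList accesses and the exact subset signatures are assumptions; translate, intersection and expand refer to samurai's subset algebra as described in this PR.

```cpp
// Sketch only (samurai and xtensor headers assumed to be included).
// `copy_cells` is either a periodic image of the local cells or the cells
// received from an MPI neighbour; `shift` is the corresponding translation.
template <std::size_t dim, class Set, class CellList>
void add_ghosts_from_copy(const Set& copy_cells,
                          const xt::xtensor_fixed<int, xt::xshape<dim>>& shift,
                          const Set& local_cells,
                          std::size_t ghost_width,
                          std::size_t level,
                          CellList& cl)
{
    // Ghost layer around the local cells (hypothetical use of the expand operator).
    auto ghost_zone = samurai::expand(local_cells, ghost_width);
    // Cells of the translated copy that fall inside this ghost layer.
    auto ghosts = samurai::intersection(samurai::translate(copy_cells, shift), ghost_zone);
    ghosts([&](const auto& interval, const auto& index)
    {
        cl[level][index].add_interval(interval); // register the ghost cells
    });
}
```

The same helper can then be applied to every periodic direction and to every MPI neighbour, which is exactly why a single algorithm covers all the configurations listed above.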
In a future version, the AMR mesh will be updated using the same idea.
One of the key elements of this PR is the intensive use of the expand operator, which adds one or more cells in every direction of a subset. Unfortunately, the current subset implementation is not optimized for this kind of operator, since many unions are involved (see the sketch below). The subset implementation is being rewritten and should reduce the time required to compute expanded subsets.
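To see why many unions are involved, here is a rough conceptual sketch of what expanding a 2D subset by one cell amounts to; this is not the library's implementation, and union_ and translate are used here as in samurai's subset algebra.

```cpp
// Conceptual 2D sketch: widening a subset by one cell in every direction is
// equivalent to the union of the subset with all of its unit translations,
// hence the cost when many such unions have to be evaluated.
template <class Set>
auto expand_by_one(const Set& set)
{
    using dir_t = xt::xtensor_fixed<int, xt::xshape<2>>;
    return samurai::union_(set,
                           samurai::translate(set, dir_t{ 1,  0}),
                           samurai::translate(set, dir_t{-1,  0}),
                           samurai::translate(set, dir_t{ 0,  1}),
                           samurai::translate(set, dir_t{ 0, -1}),
                           samurai::translate(set, dir_t{ 1,  1}),
                           samurai::translate(set, dir_t{ 1, -1}),
                           samurai::translate(set, dir_t{-1,  1}),
                           samurai::translate(set, dir_t{-1, -1}));
}
```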
Related issue
The MPI version, both with and without periodic boundary conditions, was broken. The graduation method was incorrect because it did not take into account all the levels that could potentially add new cells. Finally, the constraint defined in https://github.com/hpc-maths/samurai/pull/320 was not implemented for the parallel case. This PR fixes these issues.
How has this been tested?
To validate the MPI version of samurai and to prevent regressions, a test case named burgers_os_2d_mpi.cpp is added in demos/FiniteVolume. The main idea is to replicate the initial solution on each subdomain and to add periodic conditions; we then solve the Burgers equation with an OSMP scheme and check that the mesh on each subdomain is consistent (a possible check is sketched below). This example can also be used to measure the weak scaling of samurai.
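As an illustration of the consistency check, here is a minimal sketch under the assumption that per-level cell counts are extracted from the local mesh; the function name and the way the counts are gathered are not taken from the actual test.

```cpp
#include <mpi.h>
#include <cstdint>
#include <vector>

// Since the initial solution is replicated on every subdomain with periodic
// conditions, all ranks should adapt to the same mesh. A cheap check compares
// the per-level cell counts across ranks: the minimum and maximum over all
// ranks must coincide for every level.
bool mesh_is_consistent(const std::vector<std::int64_t>& cells_per_level, MPI_Comm comm)
{
    const int n = static_cast<int>(cells_per_level.size());
    std::vector<std::int64_t> min_counts(cells_per_level.size());
    std::vector<std::int64_t> max_counts(cells_per_level.size());
    MPI_Allreduce(cells_per_level.data(), min_counts.data(), n, MPI_INT64_T, MPI_MIN, comm);
    MPI_Allreduce(cells_per_level.data(), max_counts.data(), n, MPI_INT64_T, MPI_MAX, comm);
    return min_counts == max_counts;
}
```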
Code of Conduct
By submitting this PR, you agree to follow our Code of Conduct
- [x] I agree to follow this project's Code of Conduct
I tested this PR on two cases (advection-2d and burgers in MRA mode) with MPI from 1 to 16 ranks:
- I did not identify any bugs so far
- I noticed an improvement in scalability