Wells created/modified via ActionX not treated correctly in parallel.
Triggered by OPM/opm-grid#564.
It is highly doubtful that modifying wells via ActionX works correctly in parallel runs of flow for all cases. (Let's hope someone proves me wrong...)
The problem is that the simulator assumes that the information about all possible perforations is available when starting the simulator (Schedule::getWellsatEnd()) and that the distribution of wells is static, as in "no new perforations appear".
~~At the moment my assumptions are rather theoretical, meaning that I have not checked the code fully.~~
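To make the assumption concrete, here is a minimal sketch with stand-in types (Well, Connection and the wellsAtEnd input are illustrative only, not the real opm-common classes) of the "compute once at startup, never update" pattern meant above:

```cpp
#include <set>
#include <string>
#include <vector>

// Stand-ins for the real Opm classes; names and members are illustrative only.
struct Connection { int globalCellIndex; };
struct Well {
    std::string name;
    std::vector<Connection> connections;
};

// Computed once, before load balancing, from the wells the deck declares
// statically (what Schedule::getWellsatEnd() is expected to return) and never
// updated afterwards -- this is the static assumption that ActionX breaks.
std::set<int> cellsEverPerforated(const std::vector<Well>& wellsAtEnd)
{
    std::set<int> cells;
    for (const auto& well : wellsAtEnd)
        for (const auto& conn : well.connections)
            cells.insert(conn.globalCellIndex);
    return cells;
}
```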
Load-balancing approach that keeps all perforations of a well on one process
This is the default in flow. Before load balancing we inspect the return value of Schedule::getWellsatEnd() and use that information to make sure that no well is split. Unfortunately, neither new perforations added to wells nor new wells added via ActionX will be taken into account (see the sketch after the list below).
- If an added perforation is not part of the interior of the process where the well lives, it will at best be skipped on that process. Another well with just that connection might even appear on another process without knowledge of this one. Calculations will be wrong.
- If the new perforation is by chance on the same process, we are lucky and safe.
- ~~If a new well appears that is part of the interior of several processes, we might not do the correct thing. I am certain about this when running with --matrix-add-well-contributions=true. For the other case we need to check the code, maybe we are lucky (famous last words).~~
- ~~If the new well is in the interior of only one process, everything probably works.~~
- If a new well appears, we will run into an assertion in createLocalParallelWellInfo.
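A minimal sketch, again with stand-in types and a hypothetical helper rather than flow's actual code, of the "no well is split" post-processing and why a perforation added later by ActionX falls through:

```cpp
#include <string>
#include <vector>

struct Well { std::string name; std::vector<int> cells; };

// Hypothetical helper: after the initial graph partitioning, force every cell
// perforated by a well onto the rank that owns the well's first connection,
// based only on the wells known from Schedule::getWellsatEnd().
void keepWellsOnOneRank(std::vector<int>& cellToRank,        // partition vector
                        const std::vector<Well>& wellsAtEnd)
{
    for (const auto& well : wellsAtEnd) {
        if (well.cells.empty())
            continue;
        const int owner = cellToRank[well.cells.front()];
        for (int cell : well.cells)
            cellToRank[cell] = owner;          // no well is split ...
    }
    // ... but a perforation or well added later by ActionX was never seen
    // here, so its cell may end up in the interior of a different rank.
}
```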
Load-balancing approach that supports distributed wells
Even in this case we currently assume that we know up front the names of all wells and, for each well, which processes might have perforated cells (sketched after the list below).
- If new perforations appear that are not part of the interior of processes that already have perforations, they will be ignored.
- If they are part of the interior of a process that has other perforations, we are lucky.
- If a new well appears, we will run into an assertion in createLocalParallelWellInfo.
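A rough sketch, with stand-in types, of the kind of precomputed per-well rank information assumed here and where an unknown well name would trip an assertion (cf. createLocalParallelWellInfo; the code below is a simplified illustration, not the actual implementation):

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>

// Built once at startup: for each well name, the ranks that may hold perforations.
using WellRankMap = std::map<std::string, std::set<int>>;

const std::set<int>& ranksForWell(const WellRankMap& precomputed,
                                  const std::string& wellName)
{
    auto it = precomputed.find(wellName);
    // A well created by ActionX at runtime is not in the precomputed map,
    // so this assertion fires.
    assert(it != precomputed.end());
    return it->second;
}
```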
Sparsity Pattern with --matrix-add-well-contributions=true
This also uses only the information from Schedule::getWellsatEnd(), so matrix entries for the connections of appearing wells and perforations will be missing. I would expect segmentation faults if connections/wells appear.
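The following sketch (stand-in types, not the actual assembly code) illustrates how such a sparsity pattern is built only from the wells known at startup, so the couplings of later connections are simply absent:

```cpp
#include <set>
#include <string>
#include <utility>
#include <vector>

struct Well { std::string name; std::vector<int> cells; };
using SparsityPattern = std::set<std::pair<int, int>>;   // (row, col) entries

// With --matrix-add-well-contributions=true the pattern couples all perforated
// cells of each well -- but only for wells/connections known at startup.
void addWellCouplings(SparsityPattern& pattern, const std::vector<Well>& wellsAtEnd)
{
    for (const auto& well : wellsAtEnd)
        for (int row : well.cells)
            for (int col : well.cells)
                pattern.insert({row, col});
    // A connection added later by ActionX has no entry here; writing its
    // coupling into the assembled matrix would touch a non-existing entry
    // (hence the expected segmentation faults).
}
```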
Possible solutions
Precompute and make sure that Schedule::getWellsatEnd() or something similar holds all information when starting the simulator.
No changes would be needed in the rest of the simulator, and it will work with --matrix-add-well-contributions=true.
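A hypothetical sketch of this idea: pessimistically merge the wells/connections that each ACTIONX block could create into the statically known wells before load balancing. ActionBlock and its members are made up for illustration; the real work would be extracting this from the keywords (e.g. WELSPECS/COMPDAT) inside the action.

```cpp
#include <string>
#include <vector>

struct Well { std::string name; std::vector<int> cells; };

// Stand-in: wells/connections an ACTIONX block would create if it ever triggers.
struct ActionBlock { std::vector<Well> potentialWells; };

// Pessimistically assume every action triggers, so the "wells at end" view
// already contains everything the simulator might ever see.
std::vector<Well> wellsAtEndIncludingActions(std::vector<Well> wellsAtEnd,
                                             const std::vector<ActionBlock>& actions)
{
    for (const auto& action : actions)
        for (const auto& well : action.potentialWells)
            wellsAtEnd.push_back(well);   // duplicates by name would need merging
    return wellsAtEnd;
}
```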
Make the parallel simulator aware of appearing wells/perforations.
- Detect updates.
- In that case:
  - Recompute the well distribution (requires global communication).
  - In certain cases we need to adjust the overlap of the grid if we want to support --matrix-add-well-contributions=true. For false and for distributed wells this is not needed AFAIK. (This is quite a piece of work.)
  - Recompute the sparsity pattern, at least for --matrix-add-well-contributions=true (note that part of our speed comes from preventing this).
Apart from the first step, this is also what needs to be done for adaptive gridding. A rough sketch of the detect-and-recompute cycle is given below.
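In this hypothetical sketch, ScheduleSnapshot and the three recompute steps are made-up placeholders for the real pieces of work listed above:

```cpp
#include <set>
#include <string>

// Minimal view of the schedule state relevant for the parallel setup.
struct ScheduleSnapshot {
    std::set<std::string> wellNames;
    std::set<int> perforatedCells;
};

bool scheduleChanged(const ScheduleSnapshot& before, const ScheduleSnapshot& after)
{
    return before.wellNames != after.wellNames
        || before.perforatedCells != after.perforatedCells;
}

// Placeholders for the expensive global steps.
void repartitionWells()       { /* recompute well distribution; global communication */ }
void adjustGridOverlap()      { /* needed for --matrix-add-well-contributions=true */ }
void rebuildSparsityPattern() { /* gives up part of the usual speed advantage */ }

// After each report step: if ActionX added wells or perforations, redo the setup.
void afterReportStep(const ScheduleSnapshot& before, const ScheduleSnapshot& after)
{
    if (scheduleChanged(before, after)) {
        repartitionWells();
        adjustGridOverlap();
        rebuildSparsityPattern();
    }
}
```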
Intermediate solution
At least in parallel, prevent adding new wells/perforations.
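A hypothetical sketch of such a guard; the function and its inputs are made up, and a corresponding check for new perforations of existing wells would look analogous:

```cpp
#include <set>
#include <stdexcept>
#include <string>

// Refuse (in parallel runs only) any ActionX result that introduces wells
// not known when the simulator started.
void checkActionXResult(int numRanks,
                        const std::set<std::string>& wellsKnownAtStart,
                        const std::set<std::string>& wellsAfterAction)
{
    if (numRanks == 1)
        return;                               // serial runs are unaffected
    for (const auto& name : wellsAfterAction)
        if (wellsKnownAtStart.count(name) == 0)
            throw std::runtime_error(
                "ActionX added well '" + name +
                "', which is not supported in parallel runs yet");
}
```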
@lisajulia Would you comment here on which of these points are fixed now and which might still be an issue? Thanks a lot.