Activate WELPI and WPIMULT for use in PyAction
To activate the WELPI keyword for use in a PyAction, I had to add a way of communicating the old well production indices from the ActionHandler (i.e. the Well Model) back to the Schedule. For this, I have added the function setWellPIMap to the Schedule, which sets the Schedule attribute m_wellPIMap, a std::unordered_map<std::string, double>. This map is set from the ActionHandler before pending PyActions are executed.
To further simplify the code, I have removed this map from the arguments of handleKeyword, applyAction and iterateScheduleSection (where it was called target_wellpi), since these functions can now use m_wellPIMap. I have also removed a call that was previously made for each ActionX to fetch such a map for its matching wells; with m_wellPIMap available, these per-action calls are no longer needed.
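For illustration, here is a minimal, self-contained sketch of this pattern. Only the names setWellPIMap and m_wellPIMap come from this PR; the wellPIMap() accessor, the welpiScalingFactor helper and the simplified types are hypothetical stand-ins, not the real opm-common code:

```cpp
// Minimal stand-in for the pattern described above; not the real Opm classes.
#include <iostream>
#include <string>
#include <unordered_map>
#include <utility>

class Schedule {
public:
    // Set from the ActionHandler before pending ActionX/PyActions are run.
    void setWellPIMap(std::unordered_map<std::string, double> wellPIMap) {
        m_wellPIMap = std::move(wellPIMap);
    }

    // Hypothetical accessor used by the keyword handler below.
    const std::unordered_map<std::string, double>& wellPIMap() const {
        return m_wellPIMap;
    }

private:
    std::unordered_map<std::string, double> m_wellPIMap{};
};

// Hypothetical WELPI-style handler: instead of receiving a 'target_wellpi'
// argument, it looks the current PI up in the Schedule.
double welpiScalingFactor(const Schedule& sched, const std::string& well,
                          double requestedPI) {
    const auto& pi = sched.wellPIMap();
    const auto it = pi.find(well);
    const double currentPI = (it != pi.end()) ? it->second : 1.0;
    return requestedPI / currentPI;
}

int main() {
    Schedule sched;
    // ActionHandler side: communicate the current PIs once, up front.
    sched.setWellPIMap({{"PROD1", 50.0}, {"PROD2", 25.0}});

    // Keyword-handler side: interpret WELPI input against the stored PIs.
    std::cout << welpiScalingFactor(sched, "PROD1", 100.0) << '\n'; // 2
    std::cout << welpiScalingFactor(sched, "PROD2", 100.0) << '\n'; // 4
}
```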
It would be interesting to have a benchmark for this, both time- and memory-wise.
jenkins build this opm-simulators=5467 opm-tests=1193 please
jenkins build this opm-simulators=5467 opm-tests=1193 please
This PR contains the WELPI keyword (#4133) and also WPIMULT; the same applies to the two companion PRs https://github.com/OPM/opm-simulators/pull/5467 and https://github.com/OPM/opm-tests/pull/1193.
jenkins build this opm-simulators=5467 opm-tests=1193 please
jenkins build this opm-simulators=5467 opm-tests=1193 please
jenkins build this opm-simulators=5467 opm-tests=1193 please
@blattms: I've updated the description; we can also discuss whether this really is the best approach before merging! Thanks!
jenkins build this opm-simulators=5467 opm-tests=1193 please
benchmark please
Now I realize that this benchmark will not work, as it needs the other branches. I'll try again with the correct repos, but I don't know whether the tested cases even use ACTIONX/PYACTION.
benchmark opm-simulators=5467 opm-tests=1193 please
Benchmark result overview:
| Test | Configuration | Relative |
|---|---|---|
| opm-git | OPM Benchmark: drogon - Threads: 1 | 0.997 |
| opm-git | OPM Benchmark: drogon - Threads: 8 | 1.23 |
| opm-git | OPM Benchmark: punqs3 - Threads: 1 | 1 |
| opm-git | OPM Benchmark: punqs3 - Threads: 8 | 1.027 |
| opm-git | OPM Benchmark: smeaheia - Threads: 1 | 0.994 |
| opm-git | OPM Benchmark: smeaheia - Threads: 8 | 1.135 |
| opm-git | OPM Benchmark: spe10_model_1 - Threads: 1 | 1.001 |
| opm-git | OPM Benchmark: spe10_model_1 - Threads: 8 | 1.003 |
| opm-git | OPM Benchmark: flow_mpi_extra - Threads: 1 | 0.971 |
| opm-git | OPM Benchmark: flow_mpi_extra - Threads: 8 | 1.145 |
| opm-git | OPM Benchmark: flow_mpi_norne - Threads: 1 | 0.988 |
| opm-git | OPM Benchmark: flow_mpi_norne - Threads: 8 | 1.062 |
| opm-git | OPM Benchmark: flow_mpi_norne_4c_msw - Threads: 1 | 0.984 |
| opm-git | OPM Benchmark: flow_mpi_norne_4c_msw - Threads: 8 | 1.067 |
- Speed-up = Total time master / Total time pull request. Above 1.0 is an improvement.
View result details @ https://www.ytelses.com/opm/?page=result&id=2545
Hm.. the main change performance-wise is that instead of fetching the well production indices for each action per step, we now fetch them once per step, and only if there are pending actions or PyActions; if there are no actions, we don't fetch anything at all. Can this change really produce such large differences, e.g. OPM Benchmark: drogon - Threads: 8 at 1.23 and OPM Benchmark: flow_mpi_extra - Threads: 1 at 0.971?
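For clarity, a rough sketch of the control-flow change in question; everything below is a simplified stub, not the real ActionHandler code:

```cpp
// Rough sketch of the change discussed above; all functions are simplified stubs.
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

struct ActionX { std::string name; std::vector<std::string> matching_wells; };

// Stub for "ask the well model for the current productivity indices".
std::unordered_map<std::string, double>
fetchWellPIs(const std::vector<std::string>& wells) {
    std::cout << "fetching PIs for " << wells.size() << " wells\n";
    std::unordered_map<std::string, double> pi;
    for (const auto& w : wells) pi[w] = 1.0;
    return pi;
}

// Before: one fetch per matching ActionX, every report step.
void runActionsOld(const std::vector<ActionX>& pending) {
    for (const auto& action : pending) {
        const auto target_wellpi = fetchWellPIs(action.matching_wells);
        (void)target_wellpi; // ... applyAction(action, target_wellpi);
    }
}

// After: at most one fetch per report step, and none when nothing is pending.
void runActionsNew(const std::vector<ActionX>& pending,
                   const std::vector<std::string>& all_wells) {
    if (pending.empty())
        return;                              // no pending actions -> no fetch at all
    const auto wellPIs = fetchWellPIs(all_wells);
    (void)wellPIs; // ... schedule.setWellPIMap(wellPIs); then run ActionX/PyActions.
}

int main() {
    const std::vector<std::string> all_wells{"PROD1", "PROD2", "INJ1"};
    const std::vector<ActionX> pending{{"A1", {"PROD1"}}, {"A2", {"PROD2"}}};
    runActionsOld(pending);             // two fetches
    runActionsNew(pending, all_wells);  // one fetch
    runActionsNew({}, all_wells);       // zero fetches
}
```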
Let's rerun. A serial run should not be affected by this.
benchmark opm-simulators=5467 opm-tests=1193 please
jenkins build this opm-simulators=5467 opm-tests=1193 please
@blattms: what is the plan for moving forward with this?
jenkins build this opm-simulators=5467 please
benchmark opm-simulators=5467 please
jenkins build this opm-simulators=5467 please
jenkins build this opm-simulators=5467 please
jenkins build this opm-simulators=5467 please
jenkins build this opm-simulators=5467 please
jenkins build this opm-simulators=5467 please
jenkins build this opm-simulators=5467 please
I haven't looked at the details here, I'll do that later, but I would really prefer that we not add the proposed data member since the Schedule is supposed to contain information that's inferred from the model input only. Of course the WELPI keyword is rather strongly coupled to the dynamic simulator state since we need the current PI (or II) values in order to interpret the input data, but I still think we should aim to keep the input and the dynamically calculated information distinct. Is there really no other way of supplying the PI values than through this side channel?
> I haven't looked at the details here, I'll do that later, but I would really prefer that we not add the proposed data member since the Schedule is supposed to contain information that's inferred from the model input only. Of course the WELPI keyword is rather strongly coupled to the dynamic simulator state since we need the current PI (or II) values in order to interpret the input data, but I still think we should aim to keep the input and the dynamically calculated information distinct. Is there really no other way of supplying the PI values than through this side channel?
Yes, I know this is not ideal, yet I need to communicate the well production indices back to the Schedule. For an ACTIONX, this is possible by passing such a map in the applyAction call (https://github.com/OPM/opm-simulators/pull/5467/files#diff-ebf1a517eb80b2cd88977997fa304cc0c6bd1750afdf14a8186ba2a763d1dedfR240), yet for a PyAction there is no such call:
- A Pyaction is invoked by https://github.com/OPM/opm-common/blob/780ce15c7d6eb59c29ee1a89c72347e88297e9f5/opm/input/eclipse/Schedule/Schedule.cpp#L1984
- The actual python code is executed by https://github.com/OPM/opm-common/blob/780ce15c7d6eb59c29ee1a89c72347e88297e9f5/opm/input/eclipse/Python/PyRunModule.cpp#L85
- Without reintroducing something like the actionx_callback function, it is no longer possible to access the simulator from the PyRunModule.
Hence, my approach was to give a map containing all PI values to the Schedule before the PyActions and ActionX's are executed; this way the map also no longer needs to be passed around through all the applyAction functions as before.
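To make the ordering concrete, here is a simplified sketch of this approach. Only setWellPIMap, m_wellPIMap and applyAction are names from the actual code; the runPyAction/handleActions helpers and the toy Schedule are hypothetical stand-ins, not the real opm-common/opm-simulators call chain:

```cpp
// Simplified sketch of the staging order; not the real call chain.
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>

using WellPIMap = std::unordered_map<std::string, double>;

struct Schedule {
    WellPIMap m_wellPIMap;                                 // proposed data member
    void setWellPIMap(WellPIMap m) { m_wellPIMap = std::move(m); }

    // ActionX path: previously this could receive the PIs as 'target_wellpi'.
    void applyAction(const std::string& /*action*/) {
        // keyword handlers (WELPI/WPIMULT) read m_wellPIMap here
    }

    // PyAction path (hypothetical helper): the Python script is run by
    // PyRunModule and only sees the Schedule, so there is no argument
    // through which the PIs could be handed in.
    void runPyAction(const std::function<void(Schedule&)>& script) { script(*this); }
};

// ActionHandler side: stage the PIs on the Schedule *before* anything executes.
void handleActions(Schedule& sched, WellPIMap currentPIs) {
    sched.setWellPIMap(std::move(currentPIs));
    sched.applyAction("ACTION1");
    sched.runPyAction([](Schedule& s) { (void)s; /* a Python WELPI sees s.m_wellPIMap */ });
}

int main() {
    Schedule sched;
    handleActions(sched, {{"PROD1", 50.0}});
}
```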
It might also be a (possibly wiser) option to move all the PyAction/Python classes to opm-simulators and make them aware of the ActionHandler, but I have not assessed the other implications this would have, since it's a rather large change.
jenkins build this opm-simulators=5467 please
@bska: How do you suggest we move forward with this? Thanks!
jenkins build this opm-simulators=5467 please