
WIP update group target reduction during local solve

Open totto82 opened this issue 11 months ago • 4 comments

Just for testing.

totto82 avatar Jan 29 '25 08:01 totto82

jenkins build this failure_report please

totto82 avatar Jan 29 '25 08:01 totto82

jenkins build this failure_report please

totto82 avatar Jan 29 '25 08:01 totto82

jenkins build this failure_report please

totto82 avatar Jan 29 '25 12:01 totto82

jenkins build this failure_report please

totto82 avatar Feb 04 '25 14:02 totto82

jenkins build this failure_report please

totto82 avatar Apr 01 '25 14:04 totto82

jenkins build this failure_report please

totto82 avatar Apr 01 '25 15:04 totto82

This has been tested on a simple model and fixes issues related to --check-group-constraints-inner-well-iterations=true.

totto82 avatar Apr 01 '25 15:04 totto82

jenkins build this failure_report please

totto82 avatar Apr 01 '25 17:04 totto82

Most test failures result from differences in time-stepping; approximately half are minor differences. There are also some failures where the PR gives the correct solution while master violates some group controls, like GGPR:PROD for 16_9_4a_grpctl_stw_model2.pdf, where the limit is violated at one timestep.

totto82 avatar Apr 01 '25 18:04 totto82

jenkins build this failure_report please

totto82 avatar Apr 02 '25 06:04 totto82

jenkins build this failure_report please

totto82 avatar Apr 03 '25 14:04 totto82

jenkins build this failure_report please

totto82 avatar Apr 03 '25 19:04 totto82

jenkins build this failure_report please

totto82 avatar Apr 03 '25 20:04 totto82

jenkins build this failure_report please

totto82 avatar Apr 04 '25 06:04 totto82

Testing done. From my perspective, this can be merged. We will still have the option of disabling local switching to fall back on, but I think this PR improves on the current state of master, even though it could possibly be improved further.

totto82 avatar Apr 04 '25 13:04 totto82

It is a little worrying that we might have different global well info across different processes, although that is partly a consequence of how we do things currently. The info will be synchronized at a later stage. With a more systematic strategy change related to group control, many of the troubles we are dealing with now should be gone.

GitPaean avatar Apr 04 '25 14:04 GitPaean

This does not seem to make it within the (1-day-delayed) RC1 deadline, and consequently neither does https://github.com/OPM/opm-simulators/pull/6098. If this is considered to be an important contribution for this release, then I can probably backport it if necessary ahead of the end of RC1, or possibly for RC2, but no later than that. Thoughts?

multitalentloes avatar Apr 04 '25 15:04 multitalentloes

I do not deal with releases much, but I think you can backport this PR later; it should come in soon after the minor comments are addressed.

GitPaean avatar Apr 04 '25 15:04 GitPaean

> I do not deal with releases much, but I think you can backport this PR later; it should come in soon after the minor comments are addressed.

I did some final testing wrt. robustness when running with different numbers of processes, and I noticed some results I didn't like. So we should not merge this before I understand these new results.

totto82 avatar Apr 04 '25 15:04 totto82

> I do not deal with releases much, but I think you can backport this PR later

I mean technically, yes, but at some point the release manager's judgement/decision will prevent such backporting, as it would incur undue testing costs. I think it is a good rule of thumb to assume that PRs which add new features or change simulator behaviour, even if nominally small, will not be considered eligible for backporting beyond RC2.

bska avatar Apr 04 '25 15:04 bska

> I do not deal with releases much, but I think you can backport this PR later; it should come in soon after the minor comments are addressed.

> I did some final testing wrt. robustness when running with different numbers of processes, and I noticed some results I didn't like. So we should not merge this before I understand these new results.

I appreciate your testing findings. Just let me know when you figure out what to do with this PR. Updating the group-related quantities during a single well's solve is itself a little troublesome/conflicting, but we go with testing and hope that the global solve can fix the inconsistency.

GitPaean avatar Apr 04 '25 20:04 GitPaean

jenkins build this failure_report please

totto82 avatar Apr 07 '25 08:04 totto82

jenkins build this failure_report please

totto82 avatar Apr 07 '25 08:04 totto82

jenkins build this failure_report please

totto82 avatar Apr 08 '25 13:04 totto82

jenkins build this failure_report please

totto82 avatar Apr 09 '25 06:04 totto82

jenkins build this failure_report please

totto82 avatar Apr 10 '25 13:04 totto82

jenkins build this failure_report please

totto82 avatar Apr 14 '25 06:04 totto82

Most of the testing is done, and this can be merged. But since it impacts almost all cases, though mostly in minor ways, I don't think we should include it in the release. Users still have the option of setting --check-group-constraints-inner-well-iterations=false if they stumble on rare(?) cases where local group constraint checking is inconsistent and could lead to oscillations.

totto82 avatar Apr 22 '25 09:04 totto82
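For reference, the fallback mentioned above would be selected on the command line roughly like this (the deck file name `MODEL.DATA` is a placeholder for illustration):

```shell
# Disable group-constraint checks during inner well iterations,
# falling back to checking group constraints only in the global solve.
flow MODEL.DATA --check-group-constraints-inner-well-iterations=false
```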

jenkins build this failure_report please

totto82 avatar Apr 24 '25 06:04 totto82