Update on adaptive time stepping for sub-grid bubbles
Description
This PR updates the adaptive time stepping scheme for sub-grid bubbles to improve its accuracy. It also makes the subroutine `s_compute_bubble_source` more concise by factoring parts of it out into a few separate subroutines.
This PR does not fix any bugs or introduce new features.
Type of change
Please delete options that are not relevant.
- [x] Improves performance and code readability
Scope
- [x] This PR comprises a set of related changes with a common goal
If you cannot check the above box, please split your PR into multiple PRs that each have a common goal.
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details of your test configuration.
- [ ] Test A
- [ ] Test B
Test Configuration:
- What computers and compilers did you use to test this?
Checklist
- [ ] I have added comments for the new code
- [ ] I have added Doxygen docstrings to the new code
- [ ] I have made corresponding changes to the documentation (`docs/`)
- [ ] I have added regression tests to the test suite so that people can verify in the future that the feature is behaving as expected
- [ ] I have added example cases in `examples/` that demonstrate my new feature performing as expected. They run to completion and demonstrate "interesting physics"
- [ ] I ran `./mfc.sh format` before committing my code
- [ ] New and existing tests pass locally with my changes, including with GPU capability enabled (both NVIDIA hardware with NVHPC compilers and AMD hardware with CRAY compilers) and disabled
- [ ] This PR does not introduce any repeated code (it follows the DRY principle)
- [ ] I cannot think of a way to condense this code and reduce any introduced additional line count
If your code changes any source files (anything in `src/simulation`)
To make sure the code is performing as expected on GPU devices, I have:
- [ ] Checked that the code compiles using NVHPC compilers
- [ ] Checked that the code compiles using CRAY compilers
- [ ] Ran the code on either V100, A100, or H100 GPUs and ensured the new feature performed as expected (the GPU results match the CPU results)
- [ ] Ran the code on MI200+ GPUs and ensured the new feature performed as expected (the GPU results match the CPU results)
- [ ] Enclosed the new feature via `nvtx` ranges so that it can be identified in profiles
- [ ] Ran a Nsight Systems profile using `./mfc.sh run XXXX --gpu -t simulation --nsys`, and have attached the output file (.nsys-rep) and plain text results to this PR
- [ ] Ran an Omniperf profile using `./mfc.sh run XXXX --gpu -t simulation --omniperf`, and have attached the output file and plain text results to this PR
- [ ] Ran my code using various numbers of different GPUs (1, 2, and 8, for example) in parallel and made sure that the results scale similarly to what happens if you run without the new code/feature
Codecov Report
Attention: Patch coverage is 83.63636% with 9 lines in your changes missing coverage. Please review.
Project coverage is 58.41%. Comparing base (`2617ad8`) to head (`2efaec1`).
| Files | Patch % | Lines |
|---|---|---|
| src/simulation/m_bubbles.fpp | 86.00% | 3 Missing and 4 partials :warning: |
| src/simulation/m_time_steppers.fpp | 50.00% | 2 Missing :warning: |
Additional details and impacted files
```diff
@@            Coverage Diff            @@
##           master     #408     +/-  ##
=========================================
+ Coverage   58.37%   58.41%   +0.04%
=========================================
  Files          57       57
  Lines       14440    14453      +13
  Branches     1892     1892
=========================================
+ Hits         8429     8443      +14
  Misses       5449     5449
+ Partials      562      561       -1
```
:umbrella: View full report in Codecov by Sentry.
Is this ready for review @lee-hyeoksu?
If so, can you add an example in `examples/` that uses this feature and document it in the documentation (unless it already is)?
@sbryngelson This feature already has an example case, `0d_bubblecollapse_adap`, and the documentation is online. The feature is not new; this PR just improves it.
Do you have any idea why the benchmark fails on Phoenix? It seems it was canceled due to the time limit, but I couldn't find details.
```text
Run (cd pr && bash .github/workflows/phoenix/submit.sh .github/workflows/phoenix/bench.sh cpu) &
  (cd pr && bash .github/workflows/phoenix/submit.sh .github/workflows/phoenix/bench.sh cpu) &
  (cd master && bash .github/workflows/phoenix/submit.sh .github/workflows/phoenix/bench.sh cpu) &
  wait %1 && wait %2
  shell: /usr/bin/bash -e {0}
  env:
    ACTIONS_RUNNER_FORCE_ACTIONS_NODE_VERSION: node16
    ACTIONS_ALLOW_USE_UNSECURE_NODE_VERSION: true
Submitted batch job 655415
Submitted batch job 655416
Error: The operation was canceled.
```
Georgia Tech | Phoenix (NVHPC) (cpu) The job running on runner phoenix-2 has exceeded the maximum execution time of 360 minutes.
There's a problem with Phoenix at the moment. I'm sure it will be fixed shortly.
Hi @lee-hyeoksu -- everything seems in order. There is a problem with Frontier. Perhaps @wilfonba or @anandrdbz can fix it. I also messaged them on Slack.
@lee-hyeoksu. The only oddity I spotted was that these lines (https://github.com/lee-hyeoksu/MFC-Caltech/blob/7d571f1496444e51d6fcf1fd99032bdeb46686e0/src/simulation/m_time_steppers.fpp#L275-L280) are duplicates of these lines (https://github.com/lee-hyeoksu/MFC-Caltech/blob/7d571f1496444e51d6fcf1fd99032bdeb46686e0/src/simulation/m_time_steppers.fpp#L176-L181). I'm not 100% sure this is the problem, but allocating memory with Cray compilers can be finicky.
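For context on why those duplicated lines can matter: in Fortran, executing `allocate` on a variable that is already allocated is an error condition, and some compilers (Cray among them) surface it less forgivingly than others. A minimal, hypothetical sketch, not MFC code (the array name `q_tmp` is made up), assuming a plain allocatable array:

```fortran
! Hypothetical sketch (not MFC code): a second ALLOCATE on an
! already-allocated array is an error; with stat= it is reported
! instead of terminating the program.
program duplicate_allocation
    implicit none
    real, allocatable :: q_tmp(:)
    integer :: ierr

    allocate (q_tmp(8))             ! first allocation: fine
    allocate (q_tmp(8), stat=ierr)  ! duplicate: ierr becomes nonzero
    if (ierr /= 0) then
        print *, 'duplicate allocation caught, stat = ', ierr
    end if
end program duplicate_allocation
```

Without `stat=`, the second `allocate` terminates the program at runtime, which is consistent with the duplicated allocation block being a plausible culprit for the Cray failure.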
@wilfonba Yeah, it does look like a redundancy, so I removed one of them. Let's see if it works now. Thank you for finding it.
@sbryngelson @wilfonba Frontier test failed with exit code 135. Do you have any ideas?
I see. I'm rerunning it. It didn't finish all the 3D jobs; I'm not sure why.