Working 0-1D Chemistry (among other things)
Description
- Adds three new working example cases: `nD_perfect_reactor`, `1D_reactive_shocktube`, and `1D_inert_shocktube`.
- Fixes HLL + advection, pressure/energy calculations, and more (for chemistry).
- Removes the horrible `misc/viz.py` script and instead adds a reusable visualization library to the toolchain. See the small, individual `viz.py` files for each example case I added.
- Gathers the code I often used to remove case parameters for dimensions above 1 or 2 into one file. Cases can just import `mfc.case_utils` (a rough sketch follows this list).
- For all of this to work, I created a `pyproject.toml` for the toolchain so that it can be installed in the venv. You can also `pip3 install -e toolchain/`.
- When `./mfc.sh run` passes its internal state to the cases it reads, it now does so through the `--mfc` CLI argument (instead of positional arguments). This improves usability because you no longer have to pass your case a fake first argument when running it with `python3`.
- Now that pyrometheus/pyrometheus#87 has been merged, we point directly to Pyrometheus's first release on PyPI.
- Refactors the usage of the `i[x,y,z]` variables everywhere they made the code confusing to read.
- While computing the RHS, we no longer perform the unnecessary conversion of the buffer region's conservative variables to primitive ones; that is already covered when populating the buffer region from the interior points. This greatly improves the performance of the chemistry code, since the Newton solve for temperature would otherwise fail in this region due to an invalid conservative state.
- Augments #410 by correctly (and automatically) handling test cases that need a non-standard build of MFC (for features like Chemistry and Analytically defined patches).
- Adds test cases for chemistry:
UUID Trace
───────────────────────────────────────────────
5DCF300C 1D -> Chemistry -> Perfect Reactor
1550B67E 2D -> Chemistry -> Perfect Reactor
E8372F50 3D -> Chemistry -> Perfect Reactor
737E38B9 1D -> Chemistry -> Inert Shocktube
- Fixes the documentation for setting `BOOST_INCLUDE` on macOS.
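For illustration, here is a minimal sketch of how a case file might consume the new `--mfc` argument and trim parameters for unused dimensions. The `argparse` wiring, the JSON payload, and the `trim_dims` helper are hypothetical stand-ins, not the actual `mfc.case_utils` API:

```python
#!/usr/bin/env python3
# Hypothetical sketch only -- not the actual mfc.case_utils API.
import json
import argparse

parser = argparse.ArgumentParser()
# ./mfc.sh run now forwards its internal state through --mfc instead of a
# positional argument, so invoking the case directly with python3 needs no
# fake first argument. (The payload is assumed to be JSON here.)
parser.add_argument("--mfc", type=json.loads, default={}, metavar="STATE")
args = parser.parse_args()

case = {
    "m": 100, "n": 0, "p": 0,                    # 1D grid
    "x_domain%beg": 0.0, "x_domain%end": 1.0,
    "y_domain%beg": 0.0, "y_domain%end": 1.0,    # dropped below
    "z_domain%beg": 0.0, "z_domain%end": 1.0,    # dropped below
}

def trim_dims(params: dict, dims: int) -> dict:
    """Drop parameters that only apply to dimensions above `dims` (hypothetical helper)."""
    prefixes = {1: ("y_", "z_"), 2: ("z_",)}.get(dims, ())
    return {k: v for k, v in params.items() if not k.startswith(prefixes)}

print(json.dumps(trim_dims(case, dims=1)))
```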
Type of change
- [x] New feature (non-breaking change which adds functionality)
Scope
- [x] This PR comprises a set of related changes with a common goal
How Has This Been Tested?
The three example cases recreate simulations from papers referenced in their respective `case.py` files and READMEs.
Can you add goldenfiles/tests since we now have examples? Even if they are only for 0D or 1D, this would also help ensure the user has installed Boost/Pyro and other dependencies correctly.
Yes, will do. There is one caveat, however: `./mfc.sh test` would build two versions of MFC, the regular one and one using chemistry + `h2o2.yml` (the reaction mechanism). The user need not worry about this happening; they will just see MFC build "twice". Are you fine with this?
Codecov Report
Attention: Patch coverage is 66.66667% with 99 lines in your changes missing coverage. Please review.
Project coverage is 42.89%. Comparing base (`38a20a7`) to head (`9b76c7d`). Report is 5 commits behind head on master.
Additional details and impacted files
@@ Coverage Diff @@
## master #653 +/- ##
===========================================
- Coverage 54.53% 42.89% -11.65%
===========================================
Files 61 61
Lines 13802 16070 +2268
Branches 1727 1799 +72
===========================================
- Hits 7527 6893 -634
- Misses 5819 8191 +2372
- Partials 456 986 +530
Perhaps... what is the reason? For example, as a user it would be convenient to be able to run a chemistry case, and then switch to a 'normal' case without rebuilding (or vice-versa). Is the mechanism only available at compile time for some reason?
This is transparent to the user. Chemistry is a compile-time feature because it needs to invoke Pyrometheus to generate code for a specific mechanism file. The build "slug" system is in place to prevent rebuilds for the exact reason you described. The slug is (partially) a function of the mechanism. When you switch back to running a "regular" MFC case, it will re-use a previous build.
```python
def get_slug(self, case: input.MFCInputFile) -> str:
    # Dependencies are identified by name alone.
    if self.isDependency:
        return self.name

    # The slug hashes the target name, the build configuration, the
    # case-generated .fpp contents, and (for chemistry cases) the mechanism.
    m = hashlib.sha256()
    m.update(self.name.encode())
    m.update(CFG().make_slug().encode())
    m.update(case.get_fpp(self, False).encode())
    if case.params.get('chemistry', 'F') == 'T':
        m.update(case.get_cantera_solution().name.encode())

    return m.hexdigest()[:10]
```
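As a toy standalone illustration of the idea (the inputs below are made up, not the toolchain's real configuration strings): identical inputs hash to the same slug, so the earlier build directory is reused, while switching the mechanism yields a different slug and hence a separate build.

```python
import hashlib
from typing import Optional

def slug(target: str, cfg: str, fpp: str, mechanism: Optional[str]) -> str:
    # Mirrors the hashing above: target name, build config, case-generated
    # .fpp contents, and (for chemistry cases) the mechanism name.
    m = hashlib.sha256()
    m.update(target.encode())
    m.update(cfg.encode())
    m.update(fpp.encode())
    if mechanism is not None:
        m.update(mechanism.encode())
    return m.hexdigest()[:10]

# Re-running the same non-chemistry case -> same slug -> the build is reused.
assert slug("simulation", "release-cpu", "", None) == slug("simulation", "release-cpu", "", None)
# A chemistry case (e.g. with h2o2) -> different slug -> its own build directory.
assert slug("simulation", "release-cpu", "", "h2o2") != slug("simulation", "release-cpu", "", None)
```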
Yes it seems there is no way to avoid this, so long as Pyro is generating our code on the fly for each chemistry case. So yes I am OK with this, as we should probably have a 'store' of mechanism files that we test with located somewhere in VC
I agree. A bunch of standard mechanisms are shipped with Cantera (see https://github.com/Cantera/cantera/tree/main/data). This is why you can run these example cases without downloading a mechanism file.
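For reference, a bundled mechanism can be loaded by name without downloading anything, assuming Cantera is installed in the venv:

```python
import cantera as ct

# h2o2.yaml ships with Cantera's data files, so no local mechanism file is needed.
gas = ct.Solution("h2o2.yaml")
print(f"{gas.n_species} species, {gas.n_reactions} reactions")
```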
Benchmarking is failing with
[MFC CLI banner elided: Linux; --jobs 24; --mpi --no-gpu --no-debug --no-gcov --no-unified; --targets pre_process, simulation, and post_process]
Benchmarking pre_process, simulation, and post_process (build/benchmarks/e180):
1/4: 5eq_rk3_weno3_hllc @ benchmarks/5eq_rk3_weno3_hllc/case.py
> Log: build/benchmarks/e180/5eq_rk3_weno3_hllc.out
> Summary: build/benchmarks/e180/5eq_rk3_weno3_hllc.yaml
$ ./mfc.sh run /storage/scratch1/6/sbryngelson3/runners/actions-runner-4/_work/MFC/MFC/pr/benchmarks/5eq_rk3_weno3_hllc/case.py --case-optimization --targets pre_process simulation post_process --output-summary /storage/scratch1/6/sbryngelson3/runners/actions-runner-4/_work/MFC/MFC/pr/build/benchmarks/e180/5eq_rk3_weno3_hllc.yaml -c phoenix --gpu -g 0 1 -n 2 -- 1
Error: Failed to load YAML from "/storage/scratch1/6/sbryngelson3/runners/actions-runner-4/_work/MFC/MFC/pr/build/benchmarks/e180/5eq_rk3_weno3_hllc.yaml": [Errno 2] No such file or directory: '/storage/scratch1/6/sbryngelson3/runners/actions-runner-4/_work/MFC/MFC/pr/build/benchmarks/e180/5eq_rk3_weno3_hllc.yaml'
Terminated
mfc: ERROR > main.py finished with a 143 exit code.
mfc: (venv) Exiting the Python virtual environment.
The 'master' run doesn't have this error; only the 'pr' run does.
@sbryngelson Thanks for letting me know. Just fixed it.
@henryleberre mind pulling from upstream for 'cleanliness'? I made an update there yesterday and want to see how it worked out.
@henryleberre pull my 'fix cleanliness' PR so it's up to date with master if possible (that's the thing I would like to test).
Yes I did, that was the force-push you saw (a rebase on upstream). This is the workflow step that is causing it to fail: https://github.com/henryleberre/ChemMFC/blob/23df5d6551e7a82217b3259272efe7075d1cf30a/.github/workflows/cleanliness.yml#L99-L103
Testing this is a bit harder because it only appears meaningfully in multi-rank cases. Once you're done, can you test for correctness on a longer 4-rank case, for example, to be sure there isn't a subtle mistake?
Also @henryleberre: It's somewhat unclear what all the different variables really mean (and they don't seem to have comments). I think even after refactor we have 2 or 3 versions of this variable. Can we put comments to say what those are/why they exist?
I did "document" these like this:
! Cell Indices for the interior points (0-m, 0-n, 0-p).
type(int_bounds_info) :: idwint(1:3)
!$acc declare create(idwint)
! Cell Indices for the entire domain, including the buffer region.
type(int_bounds_info) :: idwbuff(1:3)
!$acc declare create(idwbuff)
I could maybe have specified these were the proc's local domain. They are only defined in one place now so it's very straightforward to understand. The other indices are module-specific and I'm not sure I have enough context to explain them better than the code does.
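A conceptual illustration of the relationship between the two (in Python for brevity; the actual bounds live in the Fortran modules above, and the buffer width and extents used here are placeholders, not values from the code):

```python
# Conceptual sketch only: idwint spans this rank's interior cells, while
# idwbuff additionally includes `buff_size` buffer cells on each side.
buff_size = 4            # placeholder; the real width depends on the scheme
m, n, p = 99, 49, 24     # local interior extents (0-m, 0-n, 0-p)

idwint  = [(0, m), (0, n), (0, p)]
idwbuff = [(beg - buff_size, end + buff_size) for beg, end in idwint]

print(idwint)   # [(0, 99), (0, 49), (0, 24)]
print(idwbuff)  # [(-4, 103), (-4, 53), (-4, 28)]
```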
I'm not sure how this is true. Why are the 2-rank test cases not enough?
2-rank may be enough, but I'm not sure if our current single test case in the suite is enough.
Ah I did not see this part. I mostly saw the module indices. I think those are still confusing. Why do we need any such variables beyond `idwint` and `idwbuff`? I can imagine perhaps a couple edge cases but that's about it (like when we do direction reshaping).
The debug cases are failing on this case (perhaps among others, they only run a fraction of the cases in CI):
m_mpi_proxy.fpp:1289: @:ALLOCATE_GLOBAL(ib_buff_send(0:-1 + gp_layers * (m + 2*gp_layers + 1)* (n + 2*gp_layers + 1)* (p + 2*gp_layers + 1)/ (min(m, n, p) + 2*gp_layers + 1)))
m_mpi_proxy.fpp:1301: @:ALLOCATE_GLOBAL(ib_buff_recv(0:ubound(ib_buff_send, 1)))
m_ibm.fpp:106: @:ALLOCATE_GLOBAL(ghost_points(num_gps))
m_ibm.fpp:107: @:ALLOCATE_GLOBAL(inner_points(num_inner_gps))
At line 567 of file /Users/runner/work/MFC/MFC/src/simulation/m_ibm.fpp
Fortran runtime error: Index '1' of dimension 1 of array 'ghost_points' above upper bound of -99999918
Error termination. Backtrace:
Could not print backtrace: executable file is not an executable
Could not print backtrace: executable file is not an executable
Could not print backtrace: executable file is not an executable
Could not print backtrace: executable file is not an executable
#0 0x10345ad2f
#1 0x10345b8d7
#2 0x10345bc8f
#3 0x1022f8bdf
#4 0x10230ad67
#5 0x102555d83
#6 0x1026ef79b
#7 0x1026efdfb
mfc: ERROR > :( /Users/runner/work/MFC/MFC/build/install/7e1418ef14/bin/simulation failed with exit code 2.
Cases 725A3D56, F13DF273, and 0F6C4340 from the fraction that ran in CI, here:
https://github.com/MFlowCode/MFC/actions/runs/11660348744/job/32478811683?pr=653#step:7:5812