Depletion restart with MPI
Description
This PR fixes an inconsistency when restarting a depletion simulation with MPI enabled. It is caused by the call to the `openmc/deplete/coupled_operator._load_previous_results()` method from `openmc/deplete/abc._get_start_data()`, which doubles the size of the depletion results file and starts appending new results from there.
To reproduce, run
`mpirun -n 2 --bind-to numa --map-by numa python -m mpi4py run_depletion.py`
and then
`mpirun -n 2 --bind-to numa --map-by numa python -m mpi4py restart_depletion.py`
from the `pincell_depletion` example.
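
For context, the restart script is essentially the standard depletion driver pointed at the previous results file. A minimal sketch is shown below; it is not the exact `restart_depletion.py` from the example, and the chain file name, timesteps, and power are placeholders.

```python
# Minimal restart sketch (placeholder chain file, timesteps, and power).
# Assumes the first run left depletion_results.h5 in the working directory.
import openmc
import openmc.deplete

model = openmc.Model.from_xml()  # same model as the original run

# Load the previous depletion results and hand them to the operator so the
# restart continues from the last completed step
prev_results = openmc.deplete.Results("depletion_results.h5")
op = openmc.deplete.CoupledOperator(model, chain_file="chain_simple.xml",
                                    prev_results=prev_results)

integrator = openmc.deplete.PredictorIntegrator(
    op, timesteps=[30.0], power=174.0, timestep_units='d')
integrator.integrate()
```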
Checklist
- [x] I have performed a self-review of my own code
- [ ] I have run clang-format (version 15) on any C++ source files (if applicable)
- [x] I have followed the style guidelines for Python source files (if applicable)
- [ ] I have made corresponding changes to the documentation (if applicable)
- [ ] I have added tests that prove my fix is effective or that my feature works (if applicable)
Can you add some comments to the code explaining why this division by two is necessary only in the MPI case? Also, why is the length of this doubled when using MPI?
Hi @gridley,
Is it this bit here in the `_load_previous_results` method:
```python
if comm.size != 1:
    prev_results = self.prev_res
    self.prev_res = Results()
    mat_indexes = _distribute(range(len(self.burnable_mats)))
    for res_obj in prev_results:
        new_res = res_obj.distribute(self.local_mats, mat_indexes)
        self.prev_res.append(new_res)
```
that appends the `new_res` objects twice?
@drewejohnson @gridley, I think I've figured out what's happening here:
`self.prev_res = Results()` does not reinitialize to an empty `Results` object (since the default argument in `results.py` is `filename='depletion_results.h5'`, the existing results file is loaded again), and that's why its size gets doubled after the for loop.
I've simply set the depletion results filename argument to `None`, and now it works as expected, without needing the strange division by two later on.
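
For clarity, the change amounts to something like the following sketch, based on the snippet quoted above rather than the exact diff:

```python
if comm.size != 1:
    prev_results = self.prev_res
    # Results() with its default filename would reload depletion_results.h5,
    # so the file's contents would be appended again by the loop below and
    # the length doubled; filename=None gives a genuinely empty Results object.
    self.prev_res = Results(filename=None)
    mat_indexes = _distribute(range(len(self.burnable_mats)))
    for res_obj in prev_results:
        new_res = res_obj.distribute(self.local_mats, mat_indexes)
        self.prev_res.append(new_res)
```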
@drewejohnson Any further comments or do you approve of the change here?
Thanks for your patience!