Alternative Random Ray Volume Estimators
Problem
As discussed in more depth in the OpenMC docs, the "simulation averaged" volume estimator that OpenMC currently uses as part of the random ray solver has a few issues. While it performs well on k-eff eigenvalue problems, @ChasingNeutrons found that this estimator performs very poorly on certain classes of fixed source problems that feature strong external sources located inside void-like regions. Generally, the flux estimate in low-$\Sigma_t$ regions with a strong source is garbage: it can become negative and/or blow up to many orders of magnitude higher than expected. This is a serious issue, as sources placed within a near-vacuum are likely to be a common use case for fusion device simulations.
The Fix
To remedy this issue, this PR introduces a variety of alternative volume estimators and a new policy for deciding what to do if a source region (by stochastic chance) is not hit by any ray in a particular iteration.
The long story short is that the new defaults implemented in this PR provide good stability and accuracy across both k-eigenvalue and fixed source simulations, even under pathological conditions and even when FSR miss rates are extremely high (up to 10% of FSRs being missed each iteration).
Alternative Volume Estimators
Currently there is only one volume estimator available in OpenMC, the "simulation averaged" estimator. This PR adds four options for the user to select from:
- Simulation Averaged (old default)
- Naive
- Segment Corrected
- Hybrid (new default)
More information on each of these estimators has been added to the methods section of the docs, and a practical rundown of their pros and cons has been added to the user guide. To summarize, the hybrid estimator (the new default) applies the simulation averaged estimator to all FSRs, as this is typically the most accurate estimator, but for individual FSRs that contain a user-defined external source term it applies the naive estimator instead, which remedies the negative/exploded flux terms in those specific cells.
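For reference, selecting an estimator from the Python API looks roughly like the following (a minimal sketch, assuming the new `volume_estimator` key lives in the existing `Settings.random_ray` dictionary alongside the other random ray options):

```python
import openmc

settings = openmc.Settings()
# 'hybrid' is the new default, so this line is only needed to opt into
# one of the other estimators ('naive' or 'simulation_averaged').
settings.random_ray['volume_estimator'] = 'naive'
```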
Missed FSR Policy
Previously, when an FSR was missed, the transport term in the flux estimator for iteration $n$:
$\phi^n = \frac{Q}{\Sigma_t} + \frac{\sum\limits_{r=1}^{N_i} \Delta \psi_{r}}{\Sigma_t \sum\limits_{r=1}^{N_i} \ell_r}$
is undefined, as we have a 0/0 term. However, since the second term represents transport into and out of the source region, we can set it to zero: no ray crossings means no streaming is happening. All that's left when estimating the scalar flux for a missed FSR that iteration is the source term:
$\phi_{missed}^n = \frac{Q}{\Sigma_t}$
The assumption of no streaming for a missed FSR is usually acceptable, as the streaming and source terms are often on the same order of magnitude. However, this causes major issues if there is a strong external source term and a very small $\Sigma_t$. Physically, this case implies that nearly all of the source will be transported out of the cell, with only a tiny fraction interacting there. If any rays pass through, they carry the source term away from the cell, but if the FSR is missed entirely that iteration, the entire source is effectively assumed to be absorbed in the FSR, which explodes the flux estimate.
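To put a number on the blow-up (illustrative values matching the near-void region of the cube problem described below):

```python
# Illustrative only: the old missed-FSR fallback in a near-void region
# with a strong external source.
Q = 1.0           # external source strength (arbitrary units)
sigma_t = 3.0e-5  # near-void total cross section
phi_missed = Q / sigma_t
print(phi_missed)  # ~33,000: the entire source treated as if absorbed locally
```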
To fix this issue, the new policy uses the standard estimate of
$\phi_{missed}^n = \frac{Q}{\Sigma_t}$
for regular cells without any external source terms within them, and
$\phi_{missed}^n =\phi_{missed}^{n-1}$
for cells with an external source term. In short, if we miss an FSR in a given iteration and it has an external source, we just reuse the previous iteration's flux estimate for that cell. This of course adds some degree of correlation to the simulation, but with a reasonably small miss rate (on the order of a few percent) it should be hard to detect. To test that hypothesis, we validated the new estimator and missed FSR policy as described below.
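In pseudocode, the per-iteration scalar flux estimate with the new policy looks roughly like this (a Python sketch of the logic above; the actual implementation lives in the C++ solver, and all names here are illustrative):

```python
def scalar_flux_estimate(fsr, hits, phi_prev):
    """Sketch of the new missed-FSR policy; not OpenMC's actual code.

    hits: (delta_psi, length) pairs for rays crossing the FSR this
    iteration; empty if the FSR was missed.
    """
    Q, sigma_t = fsr.source, fsr.sigma_t
    if hits:
        # Standard estimate: source term plus net transport in/out.
        delta_psi = sum(dp for dp, _ in hits)
        track_length = sum(length for _, length in hits)
        return Q / sigma_t + delta_psi / (sigma_t * track_length)
    if fsr.has_external_source:
        return phi_prev   # missed + external source: reuse last iteration
    return Q / sigma_t    # missed: assume no streaming
```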
Validation
This PR involved a mountain of data collection to validate that it's all working, but as much of this data will likely be used for an upcoming paper, I'd rather not include it directly in this PR writeup. I'm happy to share the data privately -- just ask. For now, I'll just give the topline results for the three region cube, a simple monoenergetic problem of a cube with three concentric cubic regions:
The innermost region is near-void (with $\Sigma_t \approx 3 \times 10^{-5}$) and contains the isotropic external source term; the middle region is void (with $\Sigma_t \approx 3 \times 10^{-4}$); and the outer region of the cube is an absorber (with $\Sigma_t = 1.0$). Key results for the new estimators and policy are given below under extreme conditions, where the ray density is very coarse, resulting in about a 10% FSR miss rate per iteration:
The default combination is at the bottom of the table, which gives good results in all regions. I've also tested the estimators on a variety of other problems and am happy to share the data privately.
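For anyone wanting to set up a problem in this spirit, here is a minimal sketch of the one-group cross sections via the Python API (my own reconstruction using `openmc.mgxs`, not the actual test input; material names are made up):

```python
import numpy as np
import openmc

# One-group library for three purely absorbing regions, using the
# cross sections quoted above.
groups = openmc.mgxs.EnergyGroups(group_edges=[0.0, 20.0e6])
mgxs_lib = openmc.MGXSLibrary(groups)
for name, sigma_t in [('inner_near_void', 3.0e-5),
                      ('middle_void', 3.0e-4),
                      ('outer_absorber', 1.0)]:
    xs = openmc.XSdata(name, groups)
    xs.order = 0
    xs.set_total([sigma_t])
    xs.set_absorption([sigma_t])                # no scattering
    xs.set_scatter_matrix(np.zeros((1, 1, 1)))
    mgxs_lib.add_xsdata(xs)
mgxs_lib.export_to_hdf5('mgxs.h5')
```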
One open question is whether it is worthwhile to include the segment corrected estimator at all. I haven't yet observed a case where it is a better choice than the default hybrid approach, but given that it protects against negative fluxes, perhaps it has a niche at some point?
Change in Miss Rate Warning
Previously, we warned users to increase their ray density if the FSR miss rate exceeded 0.01%. In light of all the data I've collected, I'm now confident that FSR miss rates up to 10% with the default policy will not impart significant bias on the simulation. To be conservative, I've raised the warning threshold from 0.01% to 1%.
This PR closes #2990.
Checklist
- [x] I have performed a self-review of my own code
- [x] I have run clang-format (version 15) on any C++ source files (if applicable)
- [x] I have followed the style guidelines for Python source files (if applicable)
- [x] I have made corresponding changes to the documentation (if applicable)
- [x] I have added tests that prove my fix is effective or that my feature works (if applicable)
I will update the broken tests, etc., after #3061 is merged.
This PR was submitted before linear source random ray was added to the code, so it fell out of date. I've updated the PR to include support for the new estimators + miss policy for linear sources. A few notes:
- A new Python test for linear sources with each of the volume estimators was added.
- I removed the "segment corrected" estimator option. It seems unlikely to be very useful, yet would have added significant extra complexity to the code in the form of several branch statements throughout, so it seems best to leave it out for now. In particular, since the hybrid estimator option appears to work so well, there is little motivation to add the segment corrected option.
- Currently in OpenMC, linear sources are not computed in the first 2 iterations (only the flat source component is used) so as to improve stability, given the very high uncertainty in the spatial moments and centroids in the first few iterations. It was found that in cases of low ray density (e.g., where the miss rate is high, with ~10% of source regions missed each iteration), using linear sources resulted in instabilities and bias unless a very high number of inactive batches was used. Raising this threshold so that only the flat source is used for the first 10 iterations, rather than just the first 2, appears to have totally fixed these instabilities, even with miss rates up to ~20% (the gating is sketched after this list). As such, this change was made in this PR to allow validation studies to run correctly, without stability concerns, on high miss rate problems. Notably, this ended up changing the k-eff linear source test values slightly, as the first few batches have slightly different moments, so the unconverged test solutions are slightly off. Additionally, as at least 10 inactive batches are now required before linear source contributions are calculated, the linear source tests have increased their inactive/active batches from 5/5 to 20/20 so as to allow testing of the linear source moment computations.
- To validate things with linear sources, I repeated the tests in this PR on the three region cube. Results appear as expected, with the hybrid policy delivering low errors even with 10% miss rates on this pathological problem.
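As promised in the third bullet above, a sketch of the gating change (illustrative Python; the real logic lives in the C++ solver):

```python
# Which source representation the linear source solver uses each iteration.
FLAT_SOURCE_ITERATIONS = 10  # raised from 2 in this PR

def source_representation(iteration):
    # Early iterations have very noisy centroids/spatial moments, so the
    # solver sticks with the flat source component until they settle.
    return 'flat' if iteration <= FLAT_SOURCE_ITERATIONS else 'linear'
```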
I don't seem to have mentioned this in the PR writeup yet, but for completeness: this PR also changes the new and old scalar flux vectors to be doubles rather than floats. This change greatly reduces roundoff error, as hundreds or thousands of rays may contribute to the new scalar flux estimate in a cell each iteration. In these cases, the order in which the contributions are accumulated greatly impacts roundoff error in FP32, such that tallies tend to see changes approaching 0.1%, which is more than we want to deal with for regression testing. Switching to FP64 brings these roundoff errors way down. Generally, much of the neutronics is still done in FP32 (e.g., angular fluxes and the source vector are FP32), as those quantities are not subject to operation ordering that varies with parallelism, etc.
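As a standalone illustration of that ordering sensitivity (NumPy, not OpenMC code; magnitudes are illustrative):

```python
import numpy as np

# Accumulate the same per-ray contributions in two different orders, as
# happens when rays are processed with nondeterministic parallel scheduling.
rng = np.random.default_rng(2024)
contribs = rng.random(100_000).astype(np.float32)

def sequential_sum(values, dtype):
    acc = dtype(0)
    for v in values:
        acc = dtype(acc + dtype(v))
    return acc

fwd32 = sequential_sum(contribs, np.float32)
rev32 = sequential_sum(contribs[::-1], np.float32)
print(abs(fwd32 - rev32) / fwd32)  # visible relative difference in FP32

fwd64 = sequential_sum(contribs.astype(np.float64), np.float64)
rev64 = sequential_sum(contribs[::-1].astype(np.float64), np.float64)
print(abs(fwd64 - rev64) / fwd64)  # many orders of magnitude smaller
```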