Compute material volumes in mesh elements based on raytracing
Description
PR #2802 introduced a capability to compute material volumes within mesh elements. That feature worked by looping over each mesh element, sampling a set of random points within the element, and performing a "find cell" operation on each point. I've found recently that while this works well for meshes with a modest number of elements, it can become extremely slow as the mesh resolution becomes fine.
This PR completely overhauls the algorithm for computing material volumes by raytracing across a bounding box that contains the mesh. This has the benefit of accumulating volume information much faster than doing "find cell" operations on random points. The method works as follows:
- Determine the axis-aligned bounding box of the mesh
- Starting on the "left" face of the bounding box (the face at minimum x), shoot rays in the direction (1,0,0), stopping at each material boundary. Given the track from one point to a material boundary, the `Mesh::bins_crossed` method determines which mesh elements were crossed and the corresponding track length within each element. Note that rays are spaced uniformly over the starting face.
- Once all rays are finished, renormalize the material volumes based on the known total volume of each element
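The accumulation and renormalization steps above can be sketched in Python. This is a standalone illustrative sketch, not the actual C++ implementation; the toy geometry, `material_at`, `material_boundaries`, and the regular mesh on the unit square are all invented for the example:

```python
def material_at(x, y):
    # Toy geometry: material 1 on the left half, material 2 on the right
    return 1 if x < 0.5 else 2

def material_boundaries(y):
    # x-positions where the material changes along a ray at height y
    return [0.5]

def mesh_material_volumes(nx, ny, n_rays):
    """Track-length estimate of material volumes on an nx-by-ny regular
    mesh over the unit square, using rays fired in the +x direction."""
    volumes = {}  # (element index, material ID) -> accumulated track length
    ys = [(k + 0.5) / n_rays for k in range(n_rays)]  # uniform ray spacing
    for y in ys:
        j = min(int(y * ny), ny - 1)  # mesh row the ray passes through
        # Split the ray into segments of constant material
        xs = [0.0] + material_boundaries(y) + [1.0]
        for x0, x1 in zip(xs[:-1], xs[1:]):
            mat = material_at(0.5 * (x0 + x1), y)
            # Analog of Mesh::bins_crossed: track length in each element
            for i in range(nx):
                lo, hi = i / nx, (i + 1) / nx
                length = min(x1, hi) - max(x0, lo)
                if length > 0.0:
                    key = (j * nx + i, mat)
                    volumes[key] = volumes.get(key, 0.0) + length
    # Renormalize: each element's materials must sum to the element area
    elem_area = (1.0 / nx) * (1.0 / ny)
    totals = {}
    for (elem, _), v in volumes.items():
        totals[elem] = totals.get(elem, 0.0) + v
    return {k: v * elem_area / totals[k[0]] for k, v in volumes.items()}

vols = mesh_material_volumes(nx=4, ny=2, n_rays=8)
```

Because the left two columns of elements lie entirely in material 1 and the right two in material 2, each element ends up with exactly one material occupying its full area of 0.125.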
The interface here now has three parts:
- A C API function, `openmc_mesh_material_volumes`
- A Python binding in the `openmc.lib` module, `openmc.lib.Mesh.material_volumes`
- A `material_volumes` method on the normal `openmc.Mesh` class that uses `openmc.lib` under the hood
In addition to changing the underlying method for computing material volumes on a mesh, I've updated the return type to a new MeshMaterialVolumes class that provides the result data in multiple ways. First, it can be indexed by material ID, which gives an array whose size equals the number of mesh elements, where each value is the volume of that material in the corresponding element. If the user wants all material volumes for a single element, they can request this with the by_element(mesh_index) method. Here's an example of what this looks like in practice:
>>> vols = mesh.material_volumes(...)
>>> vols
{1: <4 nonzero volumes>
2: <3 nonzero volumes>
3: <5 nonzero volumes>}
>>> vols[1]
array([0. , 0. , 0. , 0. , 0. ,
0.39269908, 0.39269908, 0. , 0.39269908, 0.39269908])
>>> vols.by_element(3)
[(3, 0.7853981633974483)]
In addition to providing user convenience, the MeshMaterialVolumes class stores the resulting material volumes very compactly. This required having a fixed value for the maximum number of materials in a given element, but the implementation is smart enough to reallocate to a larger value if the original value was insufficient.
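As a rough illustration of the compact-storage idea, one can hold a fixed number of (material, volume) slots per element and reallocate with a larger capacity when an element runs out of slots. This is a standalone sketch under my own naming (`CompactVolumes`, `add`, `_grow` are all invented), not the actual implementation:

```python
class CompactVolumes:
    """Fixed (n_elements, max_mats) table of (material ID, volume) pairs."""

    def __init__(self, n_elements, max_mats=2):
        self.n_elements = n_elements
        self.max_mats = max_mats
        # -1 marks an empty slot
        self.mats = [[-1] * max_mats for _ in range(n_elements)]
        self.vols = [[0.0] * max_mats for _ in range(n_elements)]

    def add(self, element, material, volume):
        row_m, row_v = self.mats[element], self.vols[element]
        for k in range(self.max_mats):
            if row_m[k] == material:
                row_v[k] += volume  # accumulate into existing slot
                return
            if row_m[k] == -1:
                row_m[k] = material  # claim an empty slot
                row_v[k] = volume
                return
        # No free slot: reallocate every row with a larger capacity
        self._grow()
        self.add(element, material, volume)

    def _grow(self):
        extra = self.max_mats  # double the per-element capacity
        for m, v in zip(self.mats, self.vols):
            m.extend([-1] * extra)
            v.extend([0.0] * extra)
        self.max_mats += extra

    def by_element(self, element):
        return [(m, v) for m, v in zip(self.mats[element], self.vols[element])
                if m != -1]
```

For example, starting with `max_mats=1`, adding a second material to an element triggers the reallocation transparently:

```python
cv = CompactVolumes(4, max_mats=1)
cv.add(0, 3, 0.5)
cv.add(0, 7, 0.25)   # no free slot -> capacity grows to 2
cv.by_element(0)     # [(3, 0.5), (7, 0.25)]
```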
One limitation of the new implementation is that the mesh must be fully contained in the geometry. I couldn't think of a good way around this because if you are raytracing, you have to start within the geometry.
Performance
I've been playing around with the new implementation on a few models and I'm seeing very good results. For example, on the FNG dose rate benchmark problem from SINBAD, I get the following timings:
- Point sampling: 50x50x50 regular mesh, 10 samples per element = 8.084 sec
- Point sampling: 100x100x100 regular mesh, 10 samples per element = 82.99 sec
- Ray tracing: 50x50x50 regular mesh, 10 rays per element = 0.654 sec
- Ray tracing: 100x100x100 regular mesh, 100 rays per element = 4.788 sec
It's not quite an apples-to-apples comparison, but choosing the number of rays equal to the number of point samples is the best I could think of. In these cases, the raytracing approach is 12–17× faster.
Another model I've tested on is a simple CAD-based stellarator model with several layers (plasma, structure, blanket, etc.). On this model, I get the following timings:
- Point sampling, 150x150x150 regular mesh, 1 sample per element = 177.68 sec
- Ray tracing, 150x150x150 regular mesh, 1 ray per element = 0.959 sec
In this case, you can see that the point sampling approach is excruciatingly slow for a mesh of this size. This is likely because point searches on the CAD geometry are much slower. To get reasonably converged material volumes would obviously require much more than 1 sample per element, meaning that the calculation would require hours of compute time. Raytracing here gives a 185× faster execution time.
Checklist
- [x] I have performed a self-review of my own code
- [x] I have run clang-format (version 15) on any C++ source files (if applicable)
- [x] I have followed the style guidelines for Python source files (if applicable)
- [x] I have made corresponding changes to the documentation (if applicable)
- [x] I have added tests that prove my fix is effective or that my feature works (if applicable)
It appears that you only ray-trace in a single direction within that bounding box. I'd recommend firing in all three directions to avoid errors that come from thin slices aligned with the direction of ray fire. Uniform spacing of the rays can exacerbate this, as a given material/cell could be completely missed even if it takes up a non-trivial fraction of the volume.
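To make the failure mode concrete, here's a toy sketch (invented geometry, not from the PR): a thin slab parallel to the ray direction can be missed entirely by uniformly spaced rays fired along x, while every ray fired along y crosses it:

```python
def in_slab(x, y):
    # Thin slab occupying 0.30 < y < 0.31 across the whole x range
    return 0.30 < y < 0.31

def slab_volume_estimate(direction, n_rays, step=1e-3):
    """Crude track-length estimate of the slab's area in the unit square,
    sampling each ray at fixed intervals rather than at exact boundaries."""
    total = 0.0
    for k in range(n_rays):
        s = (k + 0.5) / n_rays  # uniformly spaced ray start positions
        length = 0.0
        t = 0.5 * step
        while t < 1.0:
            x, y = (t, s) if direction == "x" else (s, t)
            if in_slab(x, y):
                length += step
            t += step
        total += length
    return total / n_rays  # mean track length ~ area on the unit square

est_x = slab_volume_estimate("x", n_rays=10)  # every ray misses the slab
est_y = slab_volume_estimate("y", n_rays=10)  # every ray crosses the slab
```

Here `est_x` is exactly zero while `est_y` recovers the true area of 0.01, so averaging estimates from multiple directions guards against this geometry-dependent bias.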
@gonuke Good point, I'll take a stab at implementing that
Fantastic PR @paulromano. I'm looking forward to diving into this.
Along the lines of @gonuke's point, I wonder if there's a model we might be able to use to compare volume estimates for random ray and the raytracing method. For streaming gaps that are off-axis I wonder if RR might capture those even better. Maybe @jtramm has thoughts there?
> One limitation of the new implementation is that the mesh must be fully contained in the geometry. I couldn't think of a good way around this because if you are raytracing, you have to start within the geometry.
There's a capability added in #2655, GeometryState::advance_to_boundary_from_void, that could help with this in a follow-on PR.
@pshriwise Thanks for the review; all your comments should be addressed now.