Added downsampling of BlocksOnCylindrical scanners in scatter
Changes in this pull request
Added the functionality to downsample BlocksOnCylindrical scanners in views and detectors per ring during scatter simulation, in order to speed up the computations. The default behaviour did not change.
Testing performed
This was tested with iterative scatter estimation and simulation on the NeuroLF and BPET block geometries and compared to non-downsampled scatter estimation. The differences were of the order of 1% when the crystals per ring and the views were both downsampled by a factor of 2.
Related issues
fixes https://github.com/UCL/STIR/issues/1290
Checklist before requesting a review
- [x] I have performed a self-review of my code
- [ ] I have added docstrings/doxygen in line with the guidance in the developer guide
- [ ] I have implemented unit tests that cover any new or modified functionality (if applicable)
- [x] The code builds and runs on my machine
- [ ] documentation/release_XXX.md has been updated with any functionality change (if applicable)
Original scatter:
Below are plots of the differences between the downsampled scatter and the original scatter, relative to mean(abs(original_scatter)) = 11.5.
downsampling factor 1.066667 (from 32 to 30 detectors per module)
downsampling factor 1.28 (from 32 to 25 detectors per module)
downsampling factor 1.6 (from 32 to 20 detectors per module)
downsampling factor 2 (from 32 to 16 detectors per module)
downsampling factor 3.2 (from 32 to 10 detectors per module)
Relative difference between the original scatter and the scatter downsampled by a factor of 2 (divided by the mean intensity of the areas with intensity > 0.1, which is 0.5)
It therefore looks like integer downsampling factors aren't required, although the intersections between the modules remain problematic (sometimes over 10% error in the scatter estimate). However, in the final image the differences caused by the downsampled scatter are below 3% of the average hot values.
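For reference, here is a minimal standalone snippet (plain C++, not using the new STIR code; the module count of 12 is just a made-up example, not the NeuroLF or BPET geometry) showing how the downsampling factors quoted above arise, and how the number of views typically scales with the detectors per ring (STIR's usual num_views = num_detectors_per_ring / 2):

```cpp
#include <cstdio>

int main()
{
    // hypothetical module count, only to make the arithmetic concrete
    const int num_modules = 12;
    const int crystals_per_module = 32;
    const int reduced_per_module[] = { 30, 25, 20, 16, 10 };

    for (int r : reduced_per_module)
    {
        const double factor = static_cast<double>(crystals_per_module) / r;
        const int dets_per_ring = num_modules * r;
        // default relation between views and detectors per ring
        std::printf("crystals/module %2d -> factor %.6f, detectors/ring %3d, views %3d\n",
                    r, factor, dets_per_ring, dets_per_ring / 2);
    }
    return 0;
}
```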
As part of this investigation, a bug was found: https://github.com/UCL/STIR/issues/1396. This should be fixed on the ProjData side, but it raises another problem with the scatter estimation code itself: for the last iteration we first upsample from the downsampled scanner to the full-size scanner here:
https://github.com/UCL/STIR/blob/8ced2d73933420457e0bc76074964b7d4ff00f0c/src/scatter_buildblock/ScatterEstimation.cxx#L937-L946
and then we run upsample_and_fit_scatter_estimate a second time here:
https://github.com/UCL/STIR/blob/8ced2d73933420457e0bc76074964b7d4ff00f0c/src/scatter_buildblock/ScatterEstimation.cxx#L1017-L1026
The second call will fail once #1396 is fixed, since the input ProjData are correctly identified as not homogeneously sampled in axial direction. However, we also don't need the second interpolation step, because input and output ProjData have the same dimensions. Therefore, I would propose to replace that call with a simple inverse_SSRB followed by scatter_normalisation.undo(). This is all that is needed at this point.
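To make the proposal concrete, here is a rough sketch of what the replacement could look like (scatter_estimate_2d_sptr, input_projdata_sptr and scatter_normalisation are placeholder names, not necessarily the actual members used in ScatterEstimation.cxx; inverse_SSRB, ProjDataInMemory and BinNormalisation::undo are the existing STIR pieces this relies on):

```cpp
#include "stir/inverse_SSRB.h"
#include "stir/ProjDataInMemory.h"

// fragment that would replace the second upsample_and_fit_scatter_estimate call

// allocate a full-size (4D) output with the geometry of the measured data
ProjDataInMemory scatter_4d(input_projdata_sptr->get_exam_info_sptr(),
                            input_projdata_sptr->get_proj_data_info_sptr()->create_shared_clone());

// distribute the 2D (segment 0) scatter estimate over all segments
inverse_SSRB(scatter_4d, *scatter_estimate_2d_sptr);

// undo the normalisation that was applied when forming the 2D estimate
scatter_normalisation.undo(scatter_4d);
```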
I'm hoping that once this is done, we can avoid the almost repeated code for the blocks and cylindrical case, but maybe do that in stages.
Do you mean in interpolate_projdata? This would indeed be very nice. I'll have a look at it!
> I'm hoping that once this is done, we can avoid the almost repeated code for the blocks and cylindrical case, but maybe do that in stages.
>
> Do you mean in interpolate_projdata? This would indeed be very nice. I'll have a look at it!
there's some in the ray-tracing projector as well, but that's on our wish-list :-)
> The second call will fail once #1396 is fixed, since the input ProjData are correctly identified as not homogeneously sampled in axial direction.
Ok, although we could at least in principle make that check more sophisticated to see if it needs to interpolate or not, but I'm fine with not doing that!
> However, we also don't need the second interpolation step, because input and output ProjData have the same dimensions. Therefore, I would propose to replace that call with a simple inverse_SSRB followed by scatter_normalisation.undo(). This is all that is needed at this point.
good point!
On a simulated cylinder, the scatter estimation has much smaller errors when we use linear interpolation instead of quadratic interpolation. I strongly suspect that this is because we quite aggressively downsample the scanner (e.g. 8 instead of 48 rings) to speed up the computation time. Quadratic interpolation simply can't cope with that level of simplification. Below is the quadratic interpolation vs. ground truth first, then the linear interpolation vs. ground truth:
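As a toy illustration of that point (plain C++, nothing to do with STIR's actual B-spline interpolation code, just a quadratic vs. linear fit through three coarse samples): a quadratic fit through a steep, coarsely sampled profile can overshoot below zero, whereas piecewise-linear interpolation stays within the range of the samples.

```cpp
#include <cstdio>

int main()
{
    // coarse samples of a profile that is zero and then rises sharply,
    // e.g. an axial scatter profile sampled on only a few rings
    const double y[3] = { 0.0, 0.0, 1.0 };   // samples at x = 0, 1, 2
    const double x = 0.5;                    // point between the first two samples

    // piecewise-linear interpolation between y[0] and y[1]
    const double lin = y[0] + x * (y[1] - y[0]);

    // quadratic (Lagrange) interpolation through all three samples
    const double quad = y[0] * (x - 1.0) * (x - 2.0) / 2.0
                      - y[1] * x * (x - 2.0)
                      + y[2] * x * (x - 1.0) / 2.0;

    std::printf("linear:    %.3f\n", lin);   // 0.000
    std::printf("quadratic: %.3f\n", quad);  // -0.125, i.e. negative "scatter"
    return 0;
}
```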
Below is the difference between the downsampled (half the detectors per ring) scatter estimate and the ground-truth scatter, followed by the same for the non-downsampled scatter estimate. The scatter amplitude is a little over 8, so in both cases the errors are below 2%.
And here is the same comparison with the reconstructed images (ground truth image has intensity 1, so the errors are usually below 1% inside the cylinder and only higher at the edge):
Plots as at the beginning of the PR: scatter for Hoffman phantom data with a downsampling factor of 2 compared against no downsampling.
I haven't checked the actual interpolation code, but I trust you! However, some documentation of what the code is doing would be good, as it's quite hard to follow otherwise.
Ok, I will add an overall description.