SIRF torch should be part of SIRF
Currently, sirf_torch lives in SIRF-Exercises; however, it would make more sense for it to be part of SIRF itself.
I suppose you are talking about https://github.com/SyneRBI/SIRF-Exercises/blob/master/notebooks/Deep_Learning_PET/sirf_torch.py
Surely it is in the plan for this funding round of CCP SyneRBI to integrate SIRF with DL libraries.
Hi both, I am looking at this at the moment. Would it make more sense to put it in SIRF or SIRF-contribs? (Also @KrisThielemans)
@KrisThielemans and I had a chat about this a few weeks ago and we thought it would make sense to make it part of SIRF. @paskino yes, that one.
I think indeed SIRF, as it will become more and more essential. However, it needs a bit of a plan on how to do this, and what to put where.
Also, ideally we do this such that it works for any operator in SIRF (e.g. MR acquisition model, Reg.resampler), so avoid variable names like "sinogram" or even "image", at least in the main framework. With any luck, we can set it up for a CIL operator as well (which then starts to look like it should be part of CIL, which might make sense, but has some drawbacks). Indeed, a lot of this is going to be the same for CIL/SIRF/STIR, so it's not obvious to me where to put it.
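To make that concrete, something along these lines could work for any such operator (a sketch only, with placeholder names; it assumes `forward`/`backward` (adjoint) methods as on the acquisition models, DataContainer-style `clone()`/`fill()`/`as_array()`, and no batch dimension; a CIL operator would use `direct`/`adjoint` instead):

```python
import torch

class OperatorFunction(torch.autograd.Function):
    """Minimal wrapper for a generic linear operator (placeholder names only)."""

    @staticmethod
    def forward(ctx, x_torch, operator, domain_template):
        # Copy the torch tensor into a container from the operator's domain.
        x = domain_template.clone()
        x.fill(x_torch.detach().cpu().numpy())
        y = operator.forward(x)
        # Keep what backward() needs: the operator and a container in its range.
        ctx.operator = operator
        ctx.range_template = y
        ctx.device = x_torch.device
        return torch.as_tensor(y.as_array(), device=x_torch.device)

    @staticmethod
    def backward(ctx, grad_output):
        g = ctx.range_template.clone()
        g.fill(grad_output.detach().cpu().numpy())
        # For a linear operator, the vector-Jacobian product is just the adjoint.
        x_grad = ctx.operator.backward(g)
        return (torch.as_tensor(x_grad.as_array(), device=ctx.device),
                None, None)
```

The PET/MR/Reg and CIL cases would then only differ in how the template containers are created and which method names are called, so nothing in the main framework needs to know about sinograms or images.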
(Note, there's also the Hessian (i.e. gradient of the gradient operator), as implemented by @gschramm for his exercises after our discussions.)
@Imraj-Singh , feel like making some proposal for discussion?
So I had a think, and we could add it to the common functionality, i.e. a SIRF_torch.py file in "SIRF/src/common/". See here: #1305.
This would be easier (and I think makes more sense) than trying to integrate it more deeply into CMake as an option that, when enabled, requires torch as a dependency.
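For illustration, a guard along these lines at the top of the module (wording hypothetical) would keep torch an optional run-time dependency rather than a build-time one:

```python
# Hypothetical guard at the top of SIRF_torch.py: the rest of SIRF builds and
# imports as before; only this module needs PyTorch at run time.
try:
    import torch
except ImportError as exc:
    raise ImportError(
        "SIRF_torch requires PyTorch; install it, e.g. with 'pip install torch'."
    ) from exc
```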
Yes, we will add the Hessian so we can backpropagate through the gradient (I think that's correct; I will double check).
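Roughly what I have in mind (a sketch only; `obj_fun.gradient` and `hessian_vprod` stand in for whatever SIRF ends up exposing): the Jacobian of the gradient is the (symmetric) Hessian, so the backward pass of a "gradient layer" is a Hessian-times-vector call.

```python
import torch

class ObjectiveGradient(torch.autograd.Function):
    """Differentiable gradient of an objective function (placeholder names)."""

    @staticmethod
    def forward(ctx, x_torch, obj_fun, template, hessian_vprod):
        # Copy the torch tensor into a container matching the objective's domain.
        x = template.clone()
        x.fill(x_torch.detach().cpu().numpy())
        ctx.x = x
        ctx.hessian_vprod = hessian_vprod
        ctx.device = x_torch.device
        g = obj_fun.gradient(x)
        return torch.as_tensor(g.as_array(), device=x_torch.device)

    @staticmethod
    def backward(ctx, grad_output):
        v = ctx.x.clone()
        v.fill(grad_output.detach().cpu().numpy())
        # VJP of x -> grad f(x) is H(x) v, using the symmetry of the Hessian.
        hv = ctx.hessian_vprod(ctx.x, v)
        return (torch.as_tensor(hv.as_array(), device=ctx.device),
                None, None, None)
```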
Perhaps we can move the discussion to #1305?