unify detection coadd methods
We build our own coadds for source initialization. The two methods are largely identical, so we should clarify how they are different or unify them.
In addition, we should canonize the use of deconvolution coadds for initialization of all sources bright enough to pull it off. For everything else, the new compact initialization from #204 should be best, unless we know something about the source, e.g. that it is a star.
I'd like to suggest a slight twist to this issue, although I would also be fine with making it a separate issue if you two don't feel like they are related enough. We talked about this a while back, but I'd like to see the whole initialization module become more modular. There's a lot of duplicate code, and there are code blocks that could be useful for users who want to do their own initialization without having to copy and paste much of our initialization code.
For example, we now have two different methods for initializing the SED, and users might have their own custom methods (especially if they have color priors). So one change along these lines would be to call it `init_extended_source_morphology` and take the SED as an input parameter, as well as a parameter to specify whether or not the morphology should be trimmed.
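To make this concrete, here is a rough sketch of what such a function could look like, assuming the detection cube, per-channel noise levels, and the SED are already available. The function name matches the suggestion above, but the signature, the SNR-weighted coadd, and the thresholding are illustrative assumptions, not existing scarlet code.

```python
import numpy as np

def init_extended_source_morphology(detect, sed, noise_rms, thresh=1.0, trim=True):
    """Hypothetical modular morphology init.

    detect    : (C, H, W) detection cube, already matched/deconvolved
    sed       : (C,) spectrum of the source (could come from a custom SED init)
    noise_rms : (C,) per-channel noise rms in `detect`
    """
    # SNR-optimal coadd for a source with this spectrum: weight each channel
    # by sed / noise_rms**2 (matched filter across channels)
    weights = sed / noise_rms**2
    norm = np.sum(sed**2 / noise_rms**2)
    coadd = np.einsum("c,cyx->yx", weights, detect) / norm
    coadd_rms = 1.0 / np.sqrt(norm)  # effective noise level of the coadd

    # keep only pixels above the detection threshold
    morph = np.where(coadd > thresh * coadd_rms, coadd, 0.0)

    if trim:
        # optionally cut the morphology down to the bounding box of its support
        ys, xs = np.nonzero(morph)
        if ys.size > 0:
            morph = morph[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return morph
```

With this split, a user with a color prior could pass in their own SED without touching any of the morphology code.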
I think a modularization is sensible but not trivial. I'm also not sure how many people would use it.
The problem I see is that the initializations of multiple parameters are usually interdependent. They should also respect the parameter constraints. It's hard to ensure both of those things and still break up the init sequences into modular steps.
That said, I'm reviewing the steps we take, especially around the choice of coadds (the scope of this issue).
After review I think that modularizing the init methods is doable. I'll describe this in another issue.
As for the detection coadd, here's my proposal:
- We compute the deconvolved image in `Observation.match`, or after `match` has been called, and store the result as a member in `Observation`.
- We also have to propagate a noise field, with the same variance as `Observation.image`, through the deconvolution to determine the effective noise level in the deconvolved image. We store that noise level in `Observation` as well (a rough sketch of this step is included below).
- We can then combine this, across multiple observations if needed, with the source spectrum and build the optimal SNR detection image as in `initialization.build_sed_coadd`. This operation is fast and can be done for every source independently.
- That detection image then defines the initial morphology.
Doing so allows us to

- put the deconvolution and its result where all the necessary pieces are (namely in `Observation`)
- remove `coadd` from the parameter list of the source constructors
- codify best practice of source initialization in the deconvolved frame
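As an illustration of the noise-propagation step (not the actual `Observation.match` code), here is a minimal numpy sketch. It assumes the deconvolution is applied as a Fourier-space kernel that maps the observed PSF to a narrower model PSF; the function name, the `eps` regularization, and the argument layout are placeholders.

```python
import numpy as np

def deconvolve_with_noise_level(image, psf, model_psf, noise_rms, eps=1e-8, seed=0):
    """Sketch: partially deconvolve `image` and measure the effective noise level.

    image     : (H, W) observed image in one channel
    psf       : observed PSF, same shape and centering as model_psf
    model_psf : narrow target PSF of the model frame
    noise_rms : per-pixel noise rms of `image`
    """
    fft, ifft = np.fft.rfft2, np.fft.irfft2
    shape = image.shape

    # kernel that maps the observed PSF to the model PSF; applying it to the
    # image is a partial deconvolution (eps avoids division by ~0 amplitudes)
    kernel_ft = fft(model_psf, s=shape) / (fft(psf, s=shape) + eps)
    deconvolved = ifft(fft(image) * kernel_ft, s=shape)

    # propagate a noise field with the same variance as the image through the
    # same operation to get the effective noise level of the deconvolved image
    noise = np.random.default_rng(seed).normal(0.0, noise_rms, shape)
    noise_level = ifft(fft(noise) * kernel_ft, s=shape).std()

    return deconvolved, noise_level
```

Per channel, `deconvolved` and `noise_level` are the quantities that would be stored on `Observation`; the per-source detection image is then their SED-weighted combination across channels (and observations), as in `build_sed_coadd`.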
I have implemented the deconvolution coadd as default detection image for extended source inits. This provides much better initial logL and reduces the number of iterations.
There are two loose ends, both of which could use input from @herjy:

- `build_initialization_coadd` implements starlet filtering to remove the largest scales. What's the use case for that, and should we enable/fold it in with the deconvolution coadd?
- The new detection coadd method simply reads out `Observation.prerender_images`, which is built during `match()`. To enable the same thing for `LowResObservation`, do you recommend that we interpolate to high-res and divide out the diff kernel?
- I don't remember doing that, and I'm not super confident about that strategy. It helps cut off the low frequencies, but in a weird way that may not correspond to deconvolution. I would rather not, unless we feel that our initial models could use some more compactness. The issue is that with the noise exploding in the deconvolution, we don't want to lose power in the signal.
- I think it is the less incorrect thing to do, yes. I'm a little worried about interpolation artefacts potentially messing with the division, but I think I did similar things before that did not blow up. It's worth trying.
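For concreteness, the "interpolate to high-res, then divide out the diff kernel" idea could look roughly like the sketch below, with a Wiener-style regularization so the division does not amplify noise without bound. This is only an illustration of the strategy being discussed; the function and parameter names are made up and not part of scarlet.

```python
import numpy as np

def divide_out_diff_kernel(image_hires, diff_kernel, reg=1e-3):
    """Sketch: remove the difference kernel from an already up-sampled image.

    image_hires : low-res observation interpolated to the model-frame grid
    diff_kernel : difference kernel on the same grid
    reg         : regularization to keep high-frequency noise from exploding
    """
    fft, ifft = np.fft.rfft2, np.fft.irfft2
    shape = image_hires.shape

    kernel_ft = fft(diff_kernel, s=shape)
    # Wiener-like filter: conj(k) / (|k|^2 + reg) instead of a plain 1/k division
    filt = np.conj(kernel_ft) / (np.abs(kernel_ft) ** 2 + reg)
    return ifft(fft(image_hires) * filt, s=shape)
```

The interpolation artefacts mentioned above would enter through `image_hires`; whether the division still behaves then is exactly the open question here.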
- Can I remove `build_initialization_coadd` then?
- As for deconvolution after interpolation, I'm trying it but it looks wrong. I'm also puzzled that the diff kernel has more pixels than the observation itself when both are up-sampled to the model frame. I'll push my code, but it would help if you could have a look.
- If it's taken care of somewhere else, then yes.
- I will; it's possible that there is a mismatch in the shapes due to some padding or whatnot.
- Re `build_initialization_coadd`: if you are not using it, and don't even think it's a good idea, can I remove it? I would rather do it now than port it to the new deconvolution init.
- As for the deconvolution in `LRObservation`: hold off for now, I'm experimenting with an alternative, more stable approach.