
Deblending sources has very large memory footprint

Open · cmccully opened this issue 8 months ago · 1 comment

I am attempting to run image segmentation on an image from a reasonably large chip, and the memory usage is higher than I would have hoped.

This script reproduces the behavior:

from astropy.io import fits
from photutils.segmentation import make_2dgaussian_kernel, detect_sources, deblend_sources, SourceCatalog
from photutils.background import Background2D
from astropy.convolution import convolve
from astropy.convolution.kernels import CustomKernel
import numpy as np
from astropy.table import Table

hdu = fits.open('ogg0m404-sq30-20221126-0098-e91.fits.fz')
data = hdu['SCI'].data.copy()
error = hdu['ERR'].data.copy()

# Estimate and subtract the 2D background
bkg = Background2D(data, (32, 32), filter_size=(3, 3))
data -= bkg.background

# Build a matched-filter (signal-to-noise) image: convolve the
# inverse-variance-weighted data and normalize by the propagated noise
kernel = make_2dgaussian_kernel(1.9, size=3)
convolved_data = convolve(data / (error * error), kernel)
kernel_squared = CustomKernel(kernel.array * kernel.array)
normalization = np.sqrt(convolve(1 / (error * error), kernel_squared))
convolved_data /= normalization

# Detect sources, then deblend; the deblending step is where memory spikes
segmentation_map = detect_sources(convolved_data, 2.5, npixels=9)

deblended_seg_map = deblend_sources(convolved_data, segmentation_map, npixels=9, nlevels=32,
                                    contrast=0.005, progress_bar=False, nproc=1)

# Build a SExtractor-style catalog with 1-indexed pixel coordinates
catalog = SourceCatalog(data, deblended_seg_map, convolved_data=convolved_data, error=error,
                        background=bkg.background)
sources = Table({'x': catalog.xcentroid + 1.0, 'y': catalog.ycentroid + 1.0,
                 'xwin': catalog.xcentroid_win + 1.0, 'ywin': catalog.ycentroid_win + 1.0,
                 'xpeak': catalog.maxval_xindex + 1, 'ypeak': catalog.maxval_yindex + 1,
                 'peak': catalog.max_value,
                 'a': catalog.semimajor_sigma.value, 'b': catalog.semiminor_sigma.value,
                 'theta': catalog.orientation.to('deg').value, 'ellipticity': catalog.ellipticity.value,
                 'kronrad': catalog.kron_radius.value,
                 'flux': catalog.kron_flux, 'fluxerr': catalog.kron_fluxerr,
                 'x2': catalog.covar_sigx2.value, 'y2': catalog.covar_sigy2.value,
                 'xy': catalog.covar_sigxy.value,
                 'background': catalog.background_mean})

You can download the image from the url value returned by https://archive-api.lco.global/frames/56127995/
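For anyone reproducing this, a minimal sketch of fetching the file, assuming the frame record at that endpoint is JSON containing a url field (as referenced above):

import requests

# Hypothetical download helper; assumes the archive API response is JSON
# with a "url" field pointing at the FITS file
frame = requests.get('https://archive-api.lco.global/frames/56127995/').json()
with open('ogg0m404-sq30-20221126-0098-e91.fits.fz', 'wb') as f:
    f.write(requests.get(frame['url']).content)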

It appears to take about 8 GB of RAM to do the source deblending on this single image (see the attached memory-usage plot).
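For reference, a sketch of confirming the peak independently of the plot, using the standard-library tracemalloc around the deblending call from the script above (recent numpy versions report array data allocations to tracemalloc):

import tracemalloc

# Measure peak traced memory around the deblending step only
tracemalloc.start()
deblended_seg_map = deblend_sources(convolved_data, segmentation_map, npixels=9, nlevels=32,
                                    contrast=0.005, progress_bar=False, nproc=1)
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f'peak traced memory during deblending: {peak / 1e9:.1f} GB')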

While this is doable for a single image on a laptop, it makes running parallel reductions infeasible. Is there something in how we slice the images that is hanging on to memory when it is no longer needed?
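One quick experiment along those lines (names as defined in the script above):

import gc

# Drop intermediates that are no longer needed once the S/N image is built,
# then force a collection; if the peak during deblending is unchanged, the
# allocations are happening inside deblend_sources rather than in references
# held by this script
del kernel, kernel_squared, normalization
gc.collect()

Rerunning the deblending step after this and comparing peaks should show whether the footprint comes from within photutils.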

cmccully · Oct 26 '23 16:10