IterativePSFPhotometry: high memory usage due to a deepcopy?
Thanks for developing the `PSFPhotometry` and `IterativePSFPhotometry` classes! I have been using them to extract stars from JWST images and they appear to be working quite well. However, I noticed that the `IterativePSFPhotometry` class takes up a lot of memory, making it very difficult to apply to dense star fields (~50,000 - 60,000 stars). It appears to use significantly more memory than v1.8's `IterativelySubtractedPSFPhotometry` class, which is now deprecated.
I think the issue might be a `deepcopy` call inside `IterativePSFPhotometry` that duplicates the `PSFPhotometry` object after a round of star-fitting is completed. This effectively saves the output from that iteration, and the `PSFPhotometry` object is then reused for the next round of star-fitting.
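To illustrate why this matters, here is a minimal, self-contained sketch (a hypothetical `ToyPhotometry` stand-in, not the real photutils class) showing that a `deepcopy` taken *after* fitting duplicates any large result arrays stored on the instance:

```python
import copy

class ToyPhotometry:
    """Hypothetical stand-in for PSFPhotometry: stores fit results on self."""
    def __init__(self):
        self.fit_results = None

    def __call__(self, nbytes):
        # "Fitting" allocates a large result buffer on the instance.
        self.fit_results = bytearray(nbytes)
        return self.fit_results

phot = ToyPhotometry()
phot(10 * 1024 * 1024)  # ~10 MB of stored results

# A deepcopy taken after calling duplicates the stored results too,
# which is the suspected source of the memory spike:
saved = copy.deepcopy(phot)
print(saved.fit_results is phot.fit_results)  # False: two full copies in memory
```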
To show this, I attach a plot of memory usage vs. time for a single iteration of star-finding with the `IterativePSFPhotometry` object (`maxiters=1`, `grouper=None`, using a WebbPSF PSF model). This was run on a sub-image containing ~5500 stars. The long positive slope from 25 - 250 s comes from the PSF fitting via `PSFPhotometry`, which only uses ~25% more memory than the old `IterativelySubtractedPSFPhotometry` did in v1.8. Afterward, however, there is a sharp spike in memory due to the `deepcopy`. I also attach a screenshot of the line-by-line memory profile of `IterativePSFPhotometry`, which points to the same call.
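The spike can also be quantified without an external line-by-line profiler. A minimal sketch using the stdlib `tracemalloc`, with a `bytearray` standing in for the large arrays a fitted photometry object holds (assumed sizes, not the actual JWST run):

```python
import copy
import tracemalloc

# ~50 MB stand-in for the fit results stored on a photometry object.
results = bytearray(50 * 1024 * 1024)

tracemalloc.start()
snapshot = copy.deepcopy(results)  # duplicates all 50 MB at once
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"deepcopy added ~{current / 1024 / 1024:.0f} MB")
```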
So, is there a way to avoid the `deepcopy` of the `PSFPhotometry` object? Or could the `deepcopy` be taken after the `PSFPhotometry` object is initialized but before any fitting is done, so that the fit outputs aren't duplicated as well? Currently I'm working around this with a hack: I initialize a new `PSFPhotometry` object for each star-finding iteration rather than calling (and overwriting the results of) the existing `PSFPhotometry` object. It isn't pretty, but it seems to work OK. Thanks!
Python: 3.10
photutils: 1.10.0
astropy: 6.0.0
numpy: 1.25.2
Operating system: macOS 12.5