spender
Using the trained normalizing flow model to get P_galaxy and P_star probabilities
Thanks for a terrific package!
A student, @GraceScherer, and I have successfully reproduced most of the functionality of the auto-encoder developed for the DESI/BGS sample in Liang et al. (2023). However, we're having some issues using the (pre-trained) normalizing flow model.
We grabbed the model with
import torch
from accelerate import Accelerator

# instantiate the Accelerator so we have a device to map the model onto
accelerator = Accelerator()

github = 'pmelchior/spender'
flow = torch.hub.load(github, 'desi_edr_galaxy_flow', map_location=accelerator.device)
And we've been playing with a small dataset of 170 objects from a single healpixel:
import os

outdir = './'
desi, galaxy_model = torch.hub.load(github, 'desi_edr_galaxy', map_location=accelerator.device)
objs = [('sv3', 'bright', 16041, 'BGS')]
desi.save_in_batches(outdir, objs)
spec, w, z, targetid, norm, zerr = desi.prepare_spectra(
    os.path.join(outdir, '16041', 'coadd-sv3-bright-16041.fits'), target='BGS')
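As a quick sanity check (assuming prepare_spectra returns array-like objects, which is how we've been treating them), the leading dimension should match the 170 objects in this healpixel:
print(spec.shape, len(targetid))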
Now we're trying to recreate some version of the outliers catalog for these objects: https://hub.pmelchior.net/spender.desi-edr.full-bgs-objects-logP.txt.bz2
Can anyone provide guidance on how to do this?
Sure! I think what you're asking is how to compute the log probabilities of some spectra with the pretrained flow, right?
If so, this should work:
desi, model = torch.hub.load(github, 'desi_edr_galaxy', map_location=accelerator.device)
s = model.encode(spec)          # latent representation of each spectrum
log_probs = flow.log_prob(s)    # log probability of each latent under the flow
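If the goal is something like the linked logP catalog, one option from here is to pair each TARGETID with its log probability and sort so the lowest-probability objects (the outlier candidates) come first. This is only a sketch, and the exact column layout of the published file is an assumption on my part:

import numpy as np

logp = log_probs.detach().cpu().numpy()
order = np.argsort(logp)            # lowest logP first = most outlier-like
with open('bgs-objects-logP.txt', 'w') as f:
    f.write('# TARGETID logP\n')
    for i in order:
        f.write(f'{int(targetid[i])} {logp[i]:.4f}\n')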
This was exactly what we needed! Thank you very much for your help!
I should let you know that as of version 0.2.6, we recommend a standard pip install spender and then this pattern:
import spender

flow = spender.hub.load('desi_edr_galaxy_flow', map_location=accelerator.device)
desi, model = spender.hub.load('desi_edr_galaxy', map_location=accelerator.device)
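The rest of the workflow is unchanged; only the loading call differs (accelerator here is the same accelerate.Accelerator instance as above):

s = model.encode(spec)
log_probs = flow.log_prob(s)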
Ah, I see! We'll use that pattern from here on out. Thank you again!