k-diffusion

Loglikelihood Evaluation Question

Open Me-lab opened this issue 1 year ago • 14 comments

In the README it is claimed that k-diffusion supports log-likelihood calculation. Is there any demo code in this repo?

Me-lab avatar Aug 30 '23 03:08 Me-lab

Hello there!

I did some searching, but unfortunately, I couldn't locate any demo code in the repository. However, I am happy to share how I did it:

import k_diffusion as K

# Load your model and sigmas from your configuration
model = ...  # Load your model
sigma_min, sigma_max = ...  # Load sigmas from config

# Load a sample datapoint from your dataset 
x, y = next(iter(dataloader))

# Calculate the log likelihood using k-diffusion
log_likelihood = K.sampling.log_likelihood(model, x, sigma_min, sigma_max)

# Interpretation: this is the log likelihood (not the NLL), so higher values mean higher likelihood

Feel free to adjust the code to match your specific implementation and needs. Don't hesitate to ask if you have any further questions!

Cheers!

EDIT: Changed according to @crowsonkb's comment below.

lowlorenz avatar Sep 04 '23 13:09 lowlorenz

Oh, I forgot to answer this! K.sampling.log_likelihood() returns the log likelihood, not the NLL, so higher values indicate higher likelihood.

crowsonkb avatar Sep 04 '23 13:09 crowsonkb

Also, a footgun to be aware of: if you are evaluating log likelihood in a distributed environment, you should not use a DDP wrapper on the denoiser's inner model. The log likelihood is computed with an adaptive step size ODE solver, which does a different number of forward and backward passes on each rank, and with a DDP wrapper this causes hangs and other extremely bad behavior. I spent most of yesterday debugging this; I was training a diffusion model and evaluating the log likelihood of the validation set every 10,000 steps. I hope this saves people some time...

crowsonkb avatar Sep 04 '23 13:09 crowsonkb
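A rough sketch of one way to arrange this, assuming an accelerate-based training loop; the model_train / model_eval split and the config placeholders below are illustrative, not taken from this repo's training script:

import accelerate
import k_diffusion as K

accelerator = accelerate.Accelerator()
config = ...          # e.g. K.config.load_config(...)
model_config = config['model']

inner_model = K.config.make_model(config).to(accelerator.device)

# Training can go through the DDP-wrapped copy returned by prepare()...
model_train = K.Denoiser(accelerator.prepare(inner_model), sigma_data=model_config['sigma_data'])

# ...but log-likelihood evaluation should use a denoiser built around the raw
# inner model, because the adaptive ODE solver does a different number of
# forward/backward passes per rank and DDP's synchronization will hang.
model_eval = K.Denoiser(inner_model, sigma_data=model_config['sigma_data'])

x, _ = ...            # a validation batch on accelerator.device
ll = K.sampling.log_likelihood(model_eval, x, model_config['sigma_min'], model_config['sigma_max'])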

Could you provide an example? I get weird results when I load the model like this:

import accelerate
import k_diffusion as K
import torch

config = K.config.load_config(open(args.config))
model_config = config['model']
size = model_config['input_size']

accelerator = accelerate.Accelerator()
device = accelerator.device
print('Using device:', device, flush=True)

inner_model = K.config.make_model(config).eval().requires_grad_(False).to(device)
inner_model.load_state_dict(torch.load(args.checkpoint, map_location='cpu')['model_ema'])
accelerator.print('Parameters:', K.utils.n_params(inner_model))
model = K.Denoiser(inner_model, sigma_data=model_config['sigma_data'])

sigma_min = model_config['sigma_min']
sigma_max = model_config['sigma_max']

lowlorenz avatar Sep 04 '23 14:09 lowlorenz

What sort of weird results? I think that should work; the problem I had was triggered by calling accelerator.prepare() on the inner_model.

crowsonkb avatar Sep 04 '23 14:09 crowsonkb

So I was getting values higher than expected (~7,500). If this is the log likelihood and not the NLL, that would lead to the following in my case:

ll = K.sampling.log_likelihood(model, x, sigma_min, sigma_max)
torch.exp(ll) # --> tensor([inf, inf, inf, inf, inf], device='cuda:0')

Should I pass inner_model instead of model?

lowlorenz avatar Sep 04 '23 14:09 lowlorenz

I sum over the dimensions of each batch item when computing log likelihood, so it is the log likelihood of the entire example, not per dimension. It can therefore be much lower or higher than you would expect if it were per dimension.

crowsonkb avatar Sep 04 '23 14:09 crowsonkb

But x should still be in batch shape right? So x.shape == (batch_size, 3, 32, 32)

lowlorenz avatar Sep 04 '23 14:09 lowlorenz

But x should still be in batch shape right? So x.shape == (batch_size, 3, 32, 32)

Yes. If you want per-dimension log likelihoods you need to divide the returned log likelihoods by (3 * 32 * 32).

crowsonkb avatar Sep 04 '23 14:09 crowsonkb

Should I pass inner_model instead of model?

model is correct; it won't know what to do with inner_model.

crowsonkb avatar Sep 04 '23 14:09 crowsonkb

But x should still be in batch shape right? So x.shape == (batch_size, 3, 32, 32)

Yes. If you want per-dimension log likelihoods you need to divide the returned log likelihoods by (3 * 32 * 32).

Even then I still get probabilities higher than 1. I feel like I am missing something.

torch.exp(ll[0] / (32*3*32)) # tensor([ 9.4208, 10.8980, 20.1980, 12.8322], device='cuda:0')

lowlorenz avatar Sep 04 '23 14:09 lowlorenz

Since diffusion models operate on continuous data, the probability of sampling any exact data point is not really meaningful, so for "log likelihood" we evaluate a log probability density function rather than a log probability. The pdf can take values greater than 1 locally, in a small region.

crowsonkb avatar Sep 04 '23 14:09 crowsonkb
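(To illustrate that last point with a toy example, unrelated to k-diffusion: a sufficiently narrow Gaussian already has a density above 1 near its mean.)

import torch

# Density of a Normal(0, 0.1) at its mean is about 3.99, even though it integrates to 1.
torch.distributions.Normal(0.0, 0.1).log_prob(torch.tensor(0.0)).exp()  # tensor(3.9894)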

Maybe the most useful value for you to compute is bits/dim? See the top answer to this Cross Validated question, for example, which explains how to turn a continuous log likelihood (what k-diffusion gives you) into that value: https://stats.stackexchange.com/questions/423120/what-is-bits-per-dimension-bits-dim-exactly-in-pixel-cnn-papers

crowsonkb avatar Sep 04 '23 14:09 crowsonkb
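For reference, a minimal sketch of that conversion, assuming the model works on 8-bit images rescaled to [-1, 1] (i.e. divided by 127.5); if your scaling differs, adjust the Jacobian term accordingly, and note that a proper comparison to discrete likelihoods also requires dequantizing the inputs with uniform noise:

import math

def bits_per_dim(ll, dims=3 * 32 * 32, scale=127.5):
    # ll: per-example log density in nats, in the model's data space
    # (assumed here: 8-bit pixels mapped to [-1, 1] by dividing by 127.5).
    # Change of variables back to 0-255 pixel space subtracts dims * log(scale);
    # then convert nats to bits and normalize by the number of dimensions.
    nll_pixel_space = -(ll - dims * math.log(scale))
    return nll_pixel_space / (dims * math.log(2))

bits_per_dim(ll)  # one value per batch item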

I will look into it. Thank you so much!

lowlorenz avatar Sep 04 '23 14:09 lowlorenz