Laplace
Confidence intervals for an embeddings layer
Hi Alex,
Thanks for the great package.
I am currently adapting Variational Autoencoders to estimate common latent variable models used in the social sciences.
I am mainly interested in building confidence intervals for the resulting encodings (i.e., the encoder's output) and less interested in the decoding (i.e., the decoder's output).
Is this feasible with your package? It is equivalent to building confidence intervals for intermediate outputs (e.g., an embedding layer) in a neural network.
Thanks, Germain
Something like this?
import torch
from torch.nn.utils import vector_to_parameters

from laplace import Laplace

model = ...
train_loader = ...
model = train_model(model, train_loader)  # standard MAP training with SGD
# Laplace only on encoder
for p in model.encoder.parameters():
    p.requires_grad = True

# Don't quantify uncertainty on decoder
for p in model.decoder.parameters():
    p.requires_grad = False
la = Laplace(model, ...)
la.fit(train_loader)
la.optimize_prior_precision()
# Getting confidence estimate on encoder outputs
# See https://aleximmer.github.io/Laplace/api_reference/parametriclaplace/#laplace.baselaplace.ParametricLaplace.sample
N_SAMPLES = 10
encoder_params_samples = la.sample(n_samples=N_SAMPLES)
encoder_output_samples = []
for sample in encoder_params_samples:
    # Load the sampled weight vector into the encoder (from torch.nn.utils)
    vector_to_parameters(sample, model.encoder.parameters())
    encoder_output_samples.append(model.encoder(x_test))
encoder_output_samples = torch.stack(encoder_output_samples)  # (N_SAMPLES, batch, latent_dim)
enc_output = encoder_output_samples.mean(dim=0)     # predictive mean of the encodings
enc_output_var = encoder_output_samples.var(dim=0)  # predictive variance per latent dim
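To turn those moments into actual confidence intervals, you can use a Gaussian approximation over the samples. A minimal self-contained sketch (the random tensor below is just a dummy stand-in for `encoder_output_samples`, and the approximate normality of the encoder outputs is an assumption):

```python
import torch

torch.manual_seed(0)

# Dummy stand-in for encoder_output_samples:
# 10 weight samples, batch of 4 test points, 2-dim latent space.
encoder_output_samples = torch.randn(10, 4, 2)

enc_output = encoder_output_samples.mean(dim=0)     # predictive mean, (4, 2)
enc_output_var = encoder_output_samples.var(dim=0)  # predictive variance, (4, 2)

# 95% Gaussian confidence interval per latent dimension
z = 1.96
ci_lower = enc_output - z * enc_output_var.sqrt()
ci_upper = enc_output + z * enc_output_var.sqrt()

print(ci_lower.shape, ci_upper.shape)  # torch.Size([4, 2]) twice
```

With only 10 samples the variance estimate is noisy, so in practice you may want a larger `N_SAMPLES`.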
Untested; no guarantee that laplace-torch supports this out of the box. But a PR is welcome!