
Allow import of `NeuralPosteriorEnsemble` from sbi.utils

jecampagne opened this issue on Oct 12 '22 · 19 comments

Hello, see the question at the end of this post concerning the use of multiple observations (i.e. amortized inference).

I'm playing with the use case described in "The flexible interface" doc.

The prior and simulator are defined as

import torch
from sbi import utils

theta_dim = 3
prior = utils.BoxUniform(low=-5 * torch.ones(theta_dim), high=5 * torch.ones(theta_dim))

def simulator(theta):
    # Gaussian noise model: N(theta, sigma=0.1)
    return theta + torch.randn_like(theta) * 0.1

Then I train with the SNRE method:

from sbi.inference import SNRE, prepare_for_sbi, simulate_for_sbi

simulator, prior = prepare_for_sbi(simulator, prior)
inference = SNRE(prior=prior)
num_sim = 2000
theta, x = simulate_for_sbi(simulator, proposal=prior, num_simulations=num_sim)
inference = inference.append_simulations(theta, x)
density_estimator = inference.train()
posterior = inference.build_posterior(density_estimator)

Now I consider a single observation

import numpy as np

true_theta = np.array([2.0, -1.5, 0.5])
n_observations = 1
observation = torch.tensor(true_theta)[None] + 0.1 * torch.randn(n_observations, theta_dim)
spls = posterior.sample((200,), x=observation[0])

I get this kind of KDE plot (contours at hdi_probs=[0.393, 0.865, 0.989]); the red dots mark the location of true_theta. [figure: KDE pair plot of the posterior samples for the single observation]
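(For reference, contours at these levels can be reproduced with something like the following; this is only a sketch assuming the corner package, not the exact plotting code I used.)

import corner  # assumed available; any KDE/contour plotting tool works

fig = corner.corner(
    spls.numpy(),                      # posterior samples, shape (200, theta_dim)
    levels=(0.393, 0.865, 0.989),      # ~1/2/3-sigma HDI contours in 2D
    truths=true_theta,                 # mark the true parameter values
    labels=[f"theta_{i}" for i in range(theta_dim)],
)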

If I now consider 100 observations with the previously trained posterior

n_observations = 100
observations = torch.tensor(true_theta)[None] + 0.1 * torch.randn(n_observations, theta_dim)
# observations has shape (n_observations, theta_dim) = (100, 3)
spls_1 = posterior.sample((200,), x=observations)

I now get these KDE plots. [figure: KDE pair plot of the posterior samples for the 100 observations]

What puzzles me is not that the contours shrink with 100 times more observations, but that true_theta is largely excluded. I would have expected the posterior to shrink around true_theta, given how the observations are generated.
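To make that expectation concrete: the simulator is N(theta, 0.1² I) and the prior is flat over [-5, 5]³, so if the 100 rows are treated as iid observations of the same theta (which is how I generated them), the analytic posterior is approximately N(mean(observations), 0.1²/100 · I), i.e. tightly concentrated around true_theta:

# Analytic posterior under the iid reading (Gaussian likelihood, flat prior):
#   p(theta | x_1..x_n) ≈ N(mean(observations), (0.1**2 / n_observations) * I)
analytic_mean = observations.mean(dim=0)        # close to true_theta
analytic_std = 0.1 / np.sqrt(n_observations)    # = 0.01 per dimension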

Q: Have I missed something?

(NB: the same behaviour occurs with the other methods, SNPE and SNLE.)

Thanks

jecampagne · Oct 12 '22