
Handle NaNs in the logp in SMC

Open astoeriko opened this pull request 10 months ago • 5 comments

Description

The changes handle the case where the logp is NaN in the SMC sampler: samples with a NaN logp are assigned a logp of -inf, so they are effectively discarded during the resampling step.

I know that this is not an ideal solution to the problem, but rather a pragmatic workaround. Maybe it would be wise to emit a warning when NaN values are encountered?
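A minimal sketch of the underlying idea (hypothetical helper, not the actual PR diff): replacing NaN log-probabilities with -inf gives the affected particles zero weight when the importance weights are normalized for resampling.

```python
import numpy as np

def mask_nan_logp(logp: np.ndarray) -> np.ndarray:
    """Replace NaN log-probabilities with -inf so the corresponding
    particles receive zero weight during SMC resampling."""
    return np.where(np.isnan(logp), -np.inf, logp)

# Example: the NaN particle ends up with weight 0 after normalization.
logp = mask_nan_logp(np.array([-1.2, np.nan, -0.5]))
weights = np.exp(logp - np.max(logp))
weights /= weights.sum()
```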

Related Issue

  • [x] Closes #7292
  • [ ] Related to #

Checklist

Type of change

  • [ ] New feature / enhancement
  • [x] Bug fix
  • [ ] Documentation
  • [ ] Maintenance
  • [ ] Other (please specify):

📚 Documentation preview 📚: https://pymc--7293.org.readthedocs.build/en/7293/

astoeriko avatar Apr 29 '24 13:04 astoeriko

:sparkling_heart: Thanks for opening this pull request! :sparkling_heart: The PyMC community really appreciates your time and effort to contribute to the project. Please make sure you have read our Contributing Guidelines and filled in our pull request template to the best of your ability.

welcome[bot] avatar Apr 29 '24 13:04 welcome[bot]

Thank you for the PR!

It would be nice if we could come up with a test that verifies this is in fact enough. Maybe we can arbitrarily introduce some NaN values into a model and sample it? For instance:

import numpy as np
import pymc as pm
import pytensor.tensor as pt

with pm.Model():
    x = pm.Normal("x")
    pm.Normal("y", mu=x, sigma=0.1, observed=1)
    # Return nan in 50% of the prior draws
    pm.Potential("make_nan", pt.where(pt.ge(x, 0), 0, np.nan))

aseyboldt avatar Apr 29 '24 15:04 aseyboldt

Testing this with a simple example would be great! I was wondering whether there is a way to artificially introduce NaN values into a model, so thanks for providing an example. It will help me verify with a simpler model whether the problems I am seeing when sampling with SMC are indeed related to unhandled NaN values. I am still not entirely clear about what the test should check in the end. That we do not end up with a single sample per chain?

astoeriko avatar Apr 29 '24 16:04 astoeriko

I think checking that all samples are positive and that the posterior variance is reasonable should be enough. The true posterior standard deviation should be $\sqrt{(1 + 1/0.1^2)^{-1}} \approx 0.1$, so maybe we just check that it is between 0.05 and 0.2 or so? We could also be more thorough and do a Kolmogorov–Smirnov test against the true posterior, but I think for our purposes here that should not be necessary.

aseyboldt avatar Apr 29 '24 16:04 aseyboldt
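A minimal sketch of the test described above, assuming the model from the earlier comment; the draw counts, tolerances, and variable names are illustrative, not the PR's actual test:

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

with pm.Model():
    x = pm.Normal("x")
    pm.Normal("y", mu=x, sigma=0.1, observed=1)
    # Half of the prior draws get a NaN logp through this potential.
    pm.Potential("make_nan", pt.where(pt.ge(x, 0), 0, np.nan))
    idata = pm.sample_smc(draws=1000, chains=2)

samples = idata.posterior["x"].values
# NaN-logp particles (x < 0) should have been discarded during resampling.
assert (samples > 0).all()
# The true posterior std is roughly 0.1; allow a generous tolerance.
assert 0.05 < samples.std() < 0.2
```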

Hi @astoeriko, are you still interested in this PR?

aloctavodia avatar Jul 16 '24 12:07 aloctavodia