
Particle Gibbs does not return log probability

Open · colehurwitz opened this issue on Jan 08, 2019 · 6 comments

Would it be possible to return the log probability when using PG?

colehurwitz · Jan 08 '19

If you have usage questions, please post a minimal working example. These questions are also more suited to the Julia slack (https://julialang.slack.com) channel #turing.

mohamed82008 · Jan 08 '19

@colehurwitz31 You should be able to extract the log probability from the chain for each sample after you've run it using chain[:lp], with chain being the variable containing the value returned from sample. Does that work for you?
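
For example (a rough sketch; the model, HMC settings, and iteration count here are arbitrary, and the exact sample signature depends on your Turing version):

using Turing

@model simple(x) = begin
    m ~ Normal(0, 1)
    x ~ Normal(m, 1)
end

chn = sample(simple(1.5), HMC(0.1, 5), 500)
chn[:lp]  # log-joint per sample; populated by samplers such as HMC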

cpfiffer · Jan 09 '19

@mohamed82008 @cpfiffer Sorry for the confusion - I was helping @colehurwitz31 with Turing, found this issue, and asked him to open an issue for it.

Yes, PG indeed doesn't report the log-joint in the chain correctly - all :lp values in the chain are 0.

using Turing

@model gdemo(x, y) = begin
    s ~ InverseGamma(10, 3)
    m ~ Normal(0, sqrt(s))
    x ~ Normal(m, sqrt(s))
    y ~ Normal(m, sqrt(s))
    return s, m
end

x, y = 1.5, 2.0

chn = sample(gdemo(x, y), PG(20, 500))
chn[:lp] # => all 0s

xukai92 · Jan 09 '19

Because PG doesn't track the log-joint, there is no way to extract it directly - is my understanding correct? @yebai

I guess a simple solution is to evaluate the log-joint after each MCMC step, though that adds some extra computation (of course, we should have an interface to make it optional). Otherwise, we could provide utility functions to re-evaluate the log-joint for each sample in the MCMC chain; a rough sketch of that idea follows.
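
For instance, such a utility could recompute the log-joint of the gdemo model above by hand with Distributions (a rough sketch; the chn[:s]/chn[:m] indexing depends on the MCMCChains version, and this helper is not an existing Turing API):

using Distributions

# Re-evaluate the log-joint of gdemo for every sample in the chain.
s_samples = vec(chn[:s])
m_samples = vec(chn[:m])
lp = [logpdf(InverseGamma(10, 3), s) +
      logpdf(Normal(0, sqrt(s)), m) +
      logpdf(Normal(m, sqrt(s)), x) +
      logpdf(Normal(m, sqrt(s)), y)
      for (s, m) in zip(s_samples, m_samples)]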

xukai92 · Jan 09 '19

Because PG doesn't track the log-joint, there is no way to extract it directly - is my understanding correct? @yebai

That's right: the weight associated with each particle is reset to 1 after each resampling step. This feature has been brought up in the past; see the related issues https://github.com/TuringLang/Turing.jl/issues/493 and https://github.com/TuringLang/Turing.jl/issues/426. It's actually quite simple to support using the API from https://github.com/TuringLang/Turing.jl/issues/634:

logp(model, vi)

However, if we accumulate log weights and store them somewhere, we can avoid an extra call to runmodel (or logp(model, vi) using the new API from https://github.com/TuringLang/Turing.jl/issues/634).
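
To make the problem concrete (a toy standalone sketch, not the actual Turing internals): per-particle log-weights only carry the increments accumulated since the last resampling step, because resampling resets them, so the full log-joint is lost unless we keep a separate running total.

# Toy illustration with 4 particles.
logweights = zeros(4)        # log-weights start at log(1) = 0
logweights .+= randn(4)      # pretend increments from the first observation
# ... resampling step happens here ...
logweights .= 0.0            # weights reset; the earlier increments are gone
logweights .+= randn(4)      # only increments since the last resampling survive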

yebai · Jan 09 '19

However, if we accumulate log weights and store them somewhere, ...

So it seems to me we'd add another field to ParticleContainer, right? Roughly along the lines of the sketch below.
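
A minimal sketch of what that could look like (a simplified stand-in for illustration, not the actual ParticleContainer definition):

# Simplified stand-in for ParticleContainer, just to illustrate the extra field.
mutable struct ToyParticleContainer{P}
    particles::Vector{P}
    logweights::Vector{Float64}   # reset to 0 after every resampling step
    logjoint::Vector{Float64}     # new field: accumulated log-joint, never reset
end

# Whenever an observation contributes logw to a particle's weight, also add it
# to the running log-joint so it can later be reported as :lp in the chain.
function add_logweight!(pc::ToyParticleContainer, i::Int, logw::Real)
    pc.logweights[i] += logw
    pc.logjoint[i] += logw
end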

xukai92 · Jan 10 '19

Likely out of date due to the effort of moving the particle MCMC and SMC samplers into https://github.com/TuringLang/AdvancedPS.jl.

yebai · Nov 12 '22