Turing.jl
Invalid test case for `Gibbs`?
I'm currently updating Turing.jl to be compatible with the new version of DynamicPPL.jl and ran into something that seems like a bug.
Currently, we implement the logdensity-evaluation required for MH here:
https://github.com/TuringLang/Turing.jl/blob/d1d4071b603d60d6ba969f3b14bfb347795644b7/src/inference/mh.jl#L251-L263
But now that we have a clear separation between sampling (`SamplingContext`) and evaluation (`DefaultContext`), it seems like this should now be:
```julia
function (f::MHLogDensityFunction)(x)
    sampler = f.sampler
    vi = f.vi

    # Save the current values and log-density so they can be restored afterwards.
    x_old, lj_old = vi[sampler], getlogp(vi)

    # Insert the proposed values and evaluate the model without sampling.
    set_namedtuple!(vi, x)
    f.model(vi, DefaultContext()) # CHANGED
    lj = getlogp(vi)

    # Restore the original state.
    vi[sampler] = x_old
    setlogp!(vi, lj_old)

    return lj
end
```
This ensures that we only evaluate, rather than sample.
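For reference, here's a minimal sketch of the distinction, assuming DynamicPPL's API from around the time of this issue (the model itself is made up for illustration):

```julia
using Turing, DynamicPPL

# A toy model, purely for illustration.
@model function demo()
    s ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s))
end

model = demo()
vi = VarInfo(model)  # constructing the VarInfo runs the model once, sampling values

# Evaluating under `DefaultContext` recomputes the log-density of the values
# already stored in `vi`; no new values are drawn.
model(vi, DefaultContext())
getlogp(vi)
```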
But making this change causes the following test case to fail:
https://github.com/TuringLang/Turing.jl/blob/d1d4071b603d60d6ba969f3b14bfb347795644b7/test/inference/gibbs.jl#L91-L111
It complains that a component of `m` is missing. The issue is that once `z` has been sampled, the number of components changes, and hence so does the size of `m`. Currently the test passes only because MH silently samples the missing component from the prior when evaluating the logpdf of the proposed state.
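To make the failure mode concrete, here is a hypothetical model of the same shape (not the exact model from `gibbs.jl`): the assignments `z` determine how many component means `m` exist, so a Gibbs update of `z` can change the dimensionality that the MH component sees.

```julia
using Turing

# Hypothetical trans-dimensional model (for illustration only): the number of
# entries in `m` depends on the sampled assignments `z`.
@model function varying_components(y)
    z = Vector{Int}(undef, length(y))
    for i in eachindex(y)
        z[i] ~ Categorical([0.5, 0.5])
    end
    K = maximum(z)  # how many components are currently "active"
    m = Vector{Float64}(undef, K)
    for k in 1:K
        m[k] ~ Normal(0, 1)  # `m[2]` only exists if some `z[i] == 2`
    end
    for i in eachindex(y)
        y[i] ~ Normal(m[z[i]], 1)
    end
end
```

If a Gibbs sweep moves `z` from all ones to containing a 2, the MH component's `VarInfo` has no value for `m[2]`, which is exactly the "missing component" the test trips over.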
It's unclear:
- whether this was intended in the first place, and
- whether it's even valid to do (see the sketch below).
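On the second point: if evaluation silently draws missing components from the prior, then the "log-density" of a fixed proposal is itself a random quantity, which is at odds with the usual MH acceptance-ratio computation. A hypothetical stand-alone illustration (the function and numbers are made up, not Turing internals):

```julia
using Distributions

# Hypothetical: a "log-density" that silently draws a missing component from
# the prior. Two evaluations of the same point give different answers.
function logdensity_with_implicit_prior_draw(m_known, y)
    m_missing = rand(Normal(0, 1))  # silently sampled, not part of the input
    return logpdf(Normal(0, 1), m_known) +
           logpdf(Normal(0, 1), m_missing) +
           sum(logpdf.(Normal(m_known, 1), y))
end

y = [0.1, -0.2]
logdensity_with_implicit_prior_draw(0.5, y)  # != the next call, in general
logdensity_with_implicit_prior_draw(0.5, y)
```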
Uh, yeah, that seems quite wrong. The line you added, `f.model(vi, DefaultContext())`, seems correct to me and should be merged in.
As to the failing test, I am not sure -- I don't recall working with this model, but I would expect this behavior to be supported.
@yebai mentioned that it might "accidentally" end up doing the correct thing, but even so, we should probably handle this in a separate/explicit way rather than implicitly, as is currently done.
Closed in favour of https://github.com/TuringLang/Turing.jl/issues/1904