Partial fix for #2095
This is a partial fix for #2095. Currently, HMC won't work with vectors (and other containers) of variables whose supports have different sizes in the transformed space. For example,
```julia
using Turing

@model function vector_of_dirichlet(::Type{TV}=Vector{Float64}) where {TV}
    xs = Vector{TV}(undef, 3)
    for i in 1:3
        xs[i] ~ Dirichlet(ones(5))
    end
end

model = vector_of_dirichlet()
sample(model, NUTS(), 1000)
```
This breaks with an error similar to the one mentioned in #2095.
This occurs because unflatten, which is used in DynamicPPL.LogDensityFunction, attempts to reconstruct the parameters using a VarInfo whose underlying storage is larger than the vector provided as input: the VarInfo stores xs as 3 x 5 values (only a subset of which is actually used), while the provided vector is 3 x (5 - 1) dimensional since it's in linked space.
We can fix this by simply using the immutable DynamicPPL.link, which results in the "underlying" VarInfo also using 3 x (5 - 1) dimensional storage for xs.
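To make the size mismatch concrete, here's a small sketch (assuming the vector_of_dirichlet model defined above and the immutable DynamicPPL.link; exact sizes and signatures may differ across DynamicPPL versions):

```julia
using Turing, DynamicPPL

model = vector_of_dirichlet()
vi = DynamicPPL.VarInfo(model)

# In the original (constrained) space each Dirichlet(ones(5)) takes 5 values,
# so the VarInfo stores 3 * 5 = 15 values for `xs`.
length(vi[:])         # 15

# Each Dirichlet(ones(5)) only has 5 - 1 = 4 degrees of freedom once linked,
# so the linked VarInfo stores 3 * (5 - 1) = 12 values for `xs`.
vi_linked = DynamicPPL.link(vi, model)
length(vi_linked[:])  # 12
```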
Thanks, @torfjelde -- looks good to me, but the use of an immutable link seems to break some existing functionality.
Yeah I'll have a look at this later :+1:
Okay, so I figured out what the issue is, and the fix is somewhat annoying.
DynamicPPL.(inv)link does not currently support restricting the transformation to the variables targeted by a Sampler, and so it links the entire VarInfo, even though only a subset of it needs linking.
We can add this quite "easily" for TypedVarInfo, but for UntypedVarInfo things become much more annoying :confused:
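To illustrate the limitation (a hypothetical two-variable model; treat this as a sketch of the behaviour rather than the exact internals):

```julia
using Turing, DynamicPPL

@model function demo()
    x ~ Dirichlet(ones(5))  # needs linking for HMC
    y ~ Normal()            # already unconstrained
end

model = demo()
vi = DynamicPPL.VarInfo(model)

# `link` transforms the whole VarInfo; there is currently no way to say
# "only link the variables this particular sampler is responsible for".
vi_linked = DynamicPPL.link(vi, model)
DynamicPPL.istrans(vi_linked, @varname(x))  # true
DynamicPPL.istrans(vi_linked, @varname(y))  # also true
```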
Okay, so this should now be fixed by https://github.com/TuringLang/DynamicPPL.jl/pull/542
EDIT: Yep, the failing example now runs nicely locally.
Is there a reason why we're doing a lot of progress reporting during tests @yebai @devmotion ? It makes it quite annoying to determine which tests are actually failing by inspecting the logs :confused:
I don"t recall any previous discussions. I think it would be good to check that progress logging "works" by sampling with progress logging in the tests - but I think this could be limited to a few selected examples and in the remaining sample calls we could disable it. Since generally progress logging introduces overhead and slows down sampling, this could even improve CI times.
There are no specific reasons for excessive sampling logs. @torfjelde, maybe open an issue to track suggestions for a test refactoring PR?
Pull Request Test Coverage Report for Build 8057174497
Details
- 0 of 1 (0.0%) changed or added relevant lines in 1 file are covered.
- No unchanged relevant lines lost coverage.
- Overall coverage remained the same at 0.0%
| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| src/mcmc/hmc.jl | 0 | 1 | 0.0% |
| Totals | |
|---|---|
| Change from base Build 8057136657: | 0.0% |
| Covered Lines: | 0 |
| Relevant Lines: | 1368 |
💛 - Coveralls
Codecov Report
Attention: Patch coverage is 0% with 1 line in your changes missing coverage. Please review.
Project coverage is 0.00%. Comparing base (4b5e4d7) to head (0aa9b8a).
| Files | Patch % | Lines |
|---|---|---|
| src/mcmc/hmc.jl | 0.00% | 1 Missing :warning: |
Additional details and impacted files
@@ Coverage Diff @@
## master #2096 +/- ##
======================================
Coverage 0.00% 0.00%
======================================
Files 21 21
Lines 1368 1368
======================================
Misses 1368 1368
:umbrella: View full report in Codecov by Sentry.