Default float type to float(Real), not Real (#685)
The same as #685, just ported to the newest version.
Pull Request Test Coverage Report for Build 11690510349
Details
- 1 of 2 (50.0%) changed or added relevant lines in 1 file are covered.
- 30 unchanged lines in 9 files lost coverage.
- Overall coverage decreased (-0.5%) to 81.881%
| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| src/utils.jl | 1 | 2 | 50.0% |
| Total: | 1 | 2 | 50.0% |
| Files with Coverage Reduction | New Missed Lines | % |
|---|---|---|
| ext/DynamicPPLForwardDiffExt.jl | 1 | 83.33% |
| src/utils.jl | 2 | 83.9% |
| src/model_utils.jl | 2 | 37.5% |
| src/varnamedvector.jl | 2 | 89.66% |
| src/test_utils.jl | 3 | 85.31% |
| src/debug_utils.jl | 4 | 54.01% |
| src/varinfo.jl | 4 | 86.9% |
| src/threadsafe.jl | 5 | 50.86% |
| src/DynamicPPL.jl | 7 | 46.15% |
| Total: | 30 | |
| Totals | |
|---|---|
| Change from base Build 11667882952: | -0.5% |
| Covered Lines: | 3439 |
| Relevant Lines: | 4200 |
💛 - Coveralls
Codecov Report
Attention: Patch coverage is 50.00000% with 1 line in your changes missing coverage. Please review.
Project coverage is 81.88%. Comparing base (b96e368) to head (9f2cd3c). Report is 3 commits behind head on master.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/utils.jl | 50.00% | 1 Missing :warning: |
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##           master     #686      +/-   ##
==========================================
- Coverage   82.14%   81.88%   -0.27%
==========================================
  Files          30       30
  Lines        4200     4200
==========================================
- Hits         3450     3439      -11
- Misses        750      761      +11
```
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
Obviously broken. Next week's problem!
to be honest it seems like all of the errors are ReverseDiff related 🤷♀️
this PR does fix the original problem though, which was that all of the calls to log...(model, chain) would fail with >1 thread
```julia
using Turing

@model function f(x)
    ns ~ filldist(Normal(0, 2.0), 3)
    m ~ Uniform(0, 1)
    x ~ Normal(m, 1)
end

model = f(1)
chain = sample(model, NUTS(), MCMCThreads(), 10, 4);

loglikelihood(model, chain)
logprior(model, chain)
logjoint(model, chain)
```
Also, the ReverseDiff tests only fail on <= Julia 1.9 (I tested 1.10 and 1.11 locally and they both work), so maybe the solution here is to bump min compat to 1.10.
EDIT: CI is failing, but there aren't any new failures compared to master (all the failures are accounted for by our other open issues).
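If we do go that route, bumping the minimum supported Julia version is, for reference, a one-line compat change in `Project.toml` (a sketch only; the package's actual `[compat]` section lists many more entries than shown here):

```toml
[compat]
julia = "1.10"
```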
> Also, the ReverseDiff tests only fail on <= Julia 1.9 (I tested 1.10 and 1.11 locally and they both work), so maybe the solution here is to bump min compat to 1.10.
What exactly is the failure? If it's that we get slightly different numbers from sampling, then this might be RNG-related or whatever, but we should identify the cause before brushing this off. If it's actual gradient computations being different, then we should identify that so we know.
Isn't it a bit annoying to just merge this if the CI is failing? IMO we should really at least identify why ReverseDiff is failing. From the logs it just seems to produce different results, in which case we should determine why.
> Isn't it a bit annoying to just merge this if the CI is failing?
To be clear, I consider a review approval to be orthogonal to seeing tests pass, i.e. generally we should have both. Thus I often approve on a basis of "code looks good, just needs tests to pass", if I know getting those tests to pass won't require significant changes to the code I've read. Do you operate on a different basis, where one should approve only once CI is happy?
> To be clear, I consider a review approval to be orthogonal to seeing tests pass, i.e. generally we should have both. Thus I often approve on a basis of "code looks good, just needs tests to pass", if I know getting those tests to pass won't require significant changes to the code I've read. Do you operate on a different basis, where one should approve only once CI is happy?
Ah gotcha 👍 And no, I operate under a similar approach :)
But specifically here it seems we've changed the CI version to make CI pass due to ReverseDiff.jl bugs, which maybe goes a bit beyond that? Do you have the logs @penelopeysm ? Happy to bump the version to LTS, as we shouldn't maintain older versions if it's a burden 👍 But useful if we know why we did it.
> Do you have the logs @penelopeysm ? Happy to bump the version to LTS, as we shouldn't maintain older versions if it's a burden 👍 But useful if we know why we did it.
A bump on this:)
```
ArgumentError: Converting an instance of ReverseDiff.TrackedReal{Float64, Float64, ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}} to Float64 is not defined. Please use `ReverseDiff.value` instead.
```
https://github.com/TuringLang/DynamicPPL.jl/actions/runs/11616496161/job/32349580296#step:6:357
To some extent it is me chasing the green tick 😅 but imo CI also reflects our confidence in the code. If we have lots of "disable this on 1.6" or "if version < 1.7 then @test_broken", it is basically untested code.
Are we able to tell if anybody is using DPPL on <1.10, given that Turing is already >=1.10?
I don't think anyone is directly depending on DynamicPPL other than Turing.jl, see
https://juliahub.com/ui/Packages/General/DynamicPPL
> If we have lots of "disable this on 1.6" or "if version < 1.7 then @test_broken", it is basically untested code.
Definitely not pro this 👍
My point is more that we shouldn't just bump Julia versions to make something work unless we can first understand what's happening.
Looking at the stacktrace it seems we're trying to setindex! on a Vector{Float64} with a TrackedReal. AFAIK this should not be happening and so I'm wondering a) why, and b) why bumping to Julia 1.10 fixes this o.O
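For reference, that failure mode can be reproduced outside DynamicPPL with a minimal sketch (the `TrackedReal` constructor usage here is purely illustrative — DynamicPPL hits this path internally, not by constructing tracked scalars directly):

```julia
# Minimal sketch of the failure mode, assuming ReverseDiff.jl is installed.
# Writing a TrackedReal into a plain Vector{Float64} forces
# convert(Float64, ::TrackedReal), which ReverseDiff deliberately leaves
# undefined and turns into the ArgumentError quoted above.
using ReverseDiff

xs = zeros(Float64, 3)
tape = ReverseDiff.InstructionTape()
t = ReverseDiff.TrackedReal(1.0, 0.0, tape)  # illustrative tracked scalar

try
    xs[1] = t  # setindex! -> convert(Float64, t) -> ArgumentError
catch err
    println(err isa ArgumentError)  # true
end

# The escape hatch the error message suggests: extract the primal value.
xs[1] = ReverseDiff.value(t)
```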