Distributions.jl
Fix Dirichlet rand overflows #1702
Closes #1702
Core Issues
The `rand(d::Dirichlet)` method draws one sample from `Gamma(d.alpha[i])` for each index `i` and writes the results to `x`.
It then normalizes by multiplying by `inv(sum(x))`. When that inverse overflows to `Inf`, we run into two failure modes (both demonstrated below):
- When all `x_i == 0`, we get `Inf * 0 = NaN`
- When some `x_i != 0`, but all are deeply subnormal, `inv(sum(x))` still overflows, and we get some `Inf` values as a result
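Both modes come down to plain floating-point behavior; a minimal illustration, independent of Distributions:

```julia
# Case 1: all draws are exactly zero, so normalizing gives Inf * 0 = NaN.
x = [0.0, 0.0, 0.0]
x .* inv(sum(x))   # -> [NaN, NaN, NaN]

# Case 2: nonzero but deeply subnormal draws; inv(sum(x)) overflows to Inf.
y = [5.0e-324, 5.0e-324, 0.0]
inv(sum(y))        # -> Inf
y .* inv(sum(y))   # -> [Inf, Inf, NaN]
```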
For case 2, on Julia 1.11.0-rc1 on Windows, for example:
```julia
julia> rand(Xoshiro(123322), Dirichlet([4.5e-5, 4.5e-5, 8e-5]))
3-element Vector{Float64}:
  Inf
  Inf
 NaN
```
Fixing Case 1
If case 1 is happening, the best thing possible from a runtime perspective is probably to just choose a random x from a categorical distribution with the same mean. This is the limit behavior of the Dirichlet distribution as the concentration parameters shrink toward zero, and my logic on why it's "safe enough" is:
- If all-zero draws are a rare occurrence, this has little impact on the end sample
- If all-zeros are common, rejecting samples and pulling another will probably yield a near-infinite reject loop. On the other hand, we're close enough to the limit behavior that floating point arithmetic errors are probably hurting us more than adopting the limit behavior.
- While this should theoretically result in incorrect variance, testing shows that variance stays within a reasonable tolerance (0.01) of the true value.
There is another option where we could reject all-zero samples and redraw up to some maximum number of attempts before failing, but I think this is probably a waste of time for little gain in accuracy. A rough sketch of the categorical fallback follows.
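As a minimal sketch of the idea, not the PR's actual code (the helper name is hypothetical):

```julia
using Distributions, Random

# Hypothetical helper: limit-behavior fallback for all-zero Gamma draws.
# Draws one index from Categorical(alpha ./ sum(alpha)) -- the mean of the
# Dirichlet -- and returns the corresponding one-hot vector, which is the
# distribution's limit behavior as the concentration parameters go to zero.
# Note: this naive version shares the subnormal-parameter weakness
# discussed further below.
function categorical_fallback!(rng::AbstractRNG, α::AbstractVector{<:Real},
                               x::AbstractVector{<:Real})
    fill!(x, 0)
    x[rand(rng, Categorical(α ./ sum(α)))] = 1
    return x
end
```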
Fixing Case 2
We rescale all values by multiplying them by `floatmax()` before normalizing, so `inv` doesn't overflow. This should work consistently for any float type where `nextfloat(zero(T)) * floatmax(T)` clears `floatmin(T)` by at least an order of magnitude or so, which I think should be true for any non-exotic float type. I originally thought it would be enough to just set the largest value to 1, but it's actually currently possible to pull multiple subnormal values pre-normalization, and the method I adopted maintains the ratios between them. A sketch of the rescaling is below.
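A hedged sketch of the rescaling (assuming the draws in `x` are finite and at least one is nonzero):

```julia
# Multiplying by floatmax(T) lifts every entry out of the subnormal range
# while preserving the ratios between entries, so inv(sum(x)) is finite
# afterwards and the usual normalization goes through.
function rescale_and_normalize!(x::AbstractVector{T}) where {T<:AbstractFloat}
    x .*= floatmax(T)
    x .*= inv(sum(x))
    return x
end
```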
Currently:
```julia
julia> rand(Xoshiro(123322), Dirichlet([4.5e-5, 4.5e-5, 8e-5]))
3-element Vector{Float64}:
  Inf
  Inf
 NaN
```
After this patch:
```julia
julia> rand(Xoshiro(123322), Dirichlet([4.5e-5, 4.5e-5, 8e-5]))
3-element Vector{Float64}:
 0.625061099164708
 0.37493890083529186
 0.0
```
Subnormal Parameters
While testing, I realized that my original fix for case 1 would break when all of the parameters themselves were deeply subnormal, e.g. Dirichlet([5e-321, 1e-321, 4e-321]). Given that the Dirichlet distribution is decently common in things like Bayesian inference, I thought it would be worth attempting to support these cases too.
Note that `mean`, `var`, etc. currently break on these deeply subnormally-parameterized distributions, but fixing that felt out of scope for this pull request. Fixing `mean` would be simple (a sketch follows), though the change could potentially be rather chunky. I am less sure about `var` and the others.
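For illustration, `mean` could reuse the same `floatmax` rescaling trick as case 2; a hedged sketch, not part of this PR:

```julia
# mean(Dirichlet) is alpha ./ sum(alpha). When every alpha is deeply
# subnormal, rescaling by floatmax(T) first preserves the ratios while
# moving the arithmetic out of the subnormal range.
function subnormal_safe_mean(α::AbstractVector{T}) where {T<:AbstractFloat}
    scaled = α .* floatmax(T)
    return scaled ./ sum(scaled)
end
```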
Codecov Report
Attention: Patch coverage is 83.33333% with 11 lines in your changes missing coverage. Please review.
Project coverage is 86.17%. Comparing base (b348b5b) to head (c77e35c).
| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/samplers/expgamma.jl | 72.50% | 11 Missing :warning: |
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##           master    #1886      +/-   ##
==========================================
- Coverage   86.20%   86.17%   -0.04%
==========================================
  Files         146      147       +1
  Lines        8769     8829      +60
==========================================
+ Hits         7559     7608      +49
- Misses       1210     1221      +11
```
Instead of dealing with subnormals, at least for the example here, sampling in log space would be sufficient (see also https://github.com/JuliaStats/Distributions.jl/issues/1003#issuecomment-636450042, https://github.com/JuliaStats/Distributions.jl/issues/1003#issuecomment-556978582, and https://github.com/JuliaStats/Distributions.jl/issues/1810). For instance, with an `ExpGamma` version of the Marsaglia sampler I get:
```julia
julia> using Distributions, LogExpFunctions, Random

julia> using Distributions: GammaMTSampler

julia> import Random: rand  # extend rand below instead of shadowing it

julia> # Inverse Power sampler in log space (exp-gamma distribution);
       # uses the x*u^(1/a) trick from Marsaglia and Tsang (2000) for shape < 1
       struct ExpGammaIPSampler{S<:Sampleable{Univariate,Continuous},T<:Real} <: Sampleable{Univariate,Continuous}
           s::S    # sampler for Gamma(1 + shape, scale)
           nia::T  # -1 / shape
       end

julia> ExpGammaIPSampler(d::Gamma) = ExpGammaIPSampler(d, GammaMTSampler)

julia> function ExpGammaIPSampler(d::Gamma, ::Type{S}) where {S<:Sampleable}
           shape_d = shape(d)
           sampler = S(Gamma{partype(d)}(1 + shape_d, scale(d)))
           return ExpGammaIPSampler(sampler, -inv(shape_d))
       end

julia> function rand(rng::AbstractRNG, s::ExpGammaIPSampler)
           x = log(rand(rng, s.s))
           e = randexp(rng)
           return muladd(s.nia, e, x)
       end

julia> function myrand!(rng::AbstractRNG, d::Dirichlet, x::AbstractVector{<:Real})
           for (i, αi) in zip(eachindex(x), d.alpha)
               @inbounds x[i] = rand(rng, ExpGammaIPSampler(Gamma(αi)))
           end
           return softmax!(x)
       end
```
```julia
julia> myrand!(Xoshiro(123322), Dirichlet([4.5e-5, 4.5e-5, 8e-5]), zeros(3))
3-element Vector{Float64}:
 0.6250610991638559
 0.37493890083615117
 0.0
```
> For instance, with an `ExpGamma` version of the Marsaglia sampler I get:
Okay, after doing some testing, this implementation seems to be superior to what I was doing, right up until `sum(alpha)` itself becomes subnormal enough.
With your example implementation:
```julia
julia> myrand!(Random.default_rng(), Dirichlet([6e-309, 5e-309, 5e-309]), zeros(3))
3-element Vector{Float64}:
 1.0
 0.0
 0.0

julia> myrand!(Random.default_rng(), Dirichlet([5e-309, 5e-309, 5e-309]), zeros(3))
3-element Vector{Float64}:
 NaN
 NaN
 NaN
```
I brought in the `randlogGamma` code snippet from #1810, and that kept working a bit deeper into the subnormals:
```julia
julia> # randlogGamma is the log-space Gamma sampler from the #1810 snippet
       function myrand2!(rng::AbstractRNG, d::Dirichlet, x::AbstractVector{<:Real})
           for (i, αi) in zip(eachindex(x), d.alpha)
               @inbounds x[i] = randlogGamma(αi)
           end
           return softmax!(x)
       end
```
```julia
julia> myrand2!(Random.default_rng(), Dirichlet([5e-310, 5e-310, 5e-310]), zeros(3))
3-element Vector{Float64}:
 0.0
 1.0
 0.0

julia> myrand2!(Random.default_rng(), Dirichlet([5e-311, 5e-311, 5e-311]), zeros(3))
3-element Vector{Float64}:
 NaN
 NaN
 NaN
```
The good news, though, is that there's only one failure mode now: when every `rand(ExpGamma)` draw is `-Inf`. I'll keep an edge-case check that falls back to the Categorical sampler in that situation.
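A small illustration of why all `-Inf` is the one remaining failure (assuming `softmax!` from LogExpFunctions):

```julia
using LogExpFunctions

# With at least one finite log-space draw, softmax! is well-defined:
softmax!([-Inf, -750.0, -Inf])   # -> [0.0, 1.0, 0.0]

# With every draw at -Inf, softmax! yields NaNs, which is what the
# Categorical limit-behavior fallback guards against:
softmax!([-Inf, -Inf, -Inf])     # -> [NaN, NaN, NaN]
```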
@devmotion So this pull request's scope has gotten larger in a strange way.
New summary of changes:
- Implement `ExpGammaIPSampler` (based off of your code above)
- Implement `ExpGammaSSSampler` (based off of #1810, with some improvements)
- Implement `_logsampler`, `_logrand`, and `_logrand!` on `Gamma` for these
- `Dirichlet` `rand` now has the following cases (sketched after this list):
  - If any alpha is > 0.5, do what we were doing before
    - I also tried to set this cutoff at 1, but this caused multiple DirichletMultinomial tests to error for reasons I do not yet have an explanation for.
  - Else, try to sample via `_logrand`
    - This dispatches to `ExpGammaIPSampler` for alpha > 0.3
    - Else it dispatches to `ExpGammaSSSampler`
  - If even these fail (all `-Inf`), use the Categorical limit-behavior fallback

What this doesn't do:
- Document or export `ExpGammaIPSampler`, `ExpGammaSSSampler`, or any of the `_log` sampling methods
This may seem a bit backwards, but I think that can be saved for another pull request later. The goal here is to close #1702.
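A hedged, simplified sketch of that case structure, assuming `myrand!` and `categorical_fallback!` from the earlier sketches are defined (the real PR routes through `_logrand!` and internal samplers, so names and structure will differ):

```julia
# Hypothetical top-level dispatch mirroring the summary above.
function dirichlet_rand_sketch!(rng::AbstractRNG, d::Dirichlet, x::AbstractVector{<:Real})
    α = d.alpha
    if any(>(0.5), α)
        # Old path: linear-space Gamma draws, then normalize.
        rand!(rng, d, x)
    else
        # Log-space path; the PR picks ExpGammaIPSampler for alpha > 0.3
        # and ExpGammaSSSampler otherwise, then applies softmax!.
        myrand!(rng, d, x)
        if all(isnan, x)  # every log-space draw was -Inf
            categorical_fallback!(rng, α, x)
        end
    end
    return x
end
```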
I started writing a PR for the ExpGamma distribution and documentation. But this PR gets the Dirichlet sampling right, which is really the harder problem and much more important. I will wait for it to merge and then promise to build on it, moving the undocumented methods to an expgamma.jl univariate distribution page.
@devmotion Could this be looked at again? Thanks.
Just wondering if there is an objection. Do we need to make expgamma.jl before this can be merged?
I wouldn't take the lack of response as objection so much as lack of maintainer bandwidth to review and respond (certainly speaking for myself, at least). I appreciate the contribution and your patience, @quildtide.
Though I'm not currently able to provide a thoughtful review, I can say that something that will make a future reviewer's job easier would be to include comments in the code that justify the choices of 0.5 and 0.3 as cutoffs where applicable.
> Though I'm not currently able to provide a thoughtful review, I can say that something that will make a future reviewer's job easier would be to include comments in the code that justify the choices of 0.5 and 0.3 as cutoffs where applicable.
The 0.3 was based on the note in Liu, Martin, and Syring that their algorithm's acceptance rate is higher up to 0.3 when compared to algorithm 3 in Kundu and Gupta. I neglected to notice, however, that we do not currently have Kundu and Gupta's algorithm 3 implemented.
The 0.5 was mostly arbitrary; it was originally 1, but a test failed when it was that high.
It's possible that these cutoffs are not optimal for performance reasons; I did not have time when I made this PR to do proper performance testing. I think I may do some of that in the near future.
I am also tempted to try implementing the Kundu-Gupta sampler now, but I reckon that would only make the PR harder to review.
I have pushed comments for now. I will do some performance testing to find potential better thresholds if I wind up having time to do so before this can be reviewed.
My general feeling is that subnormal numbers are not a major concern in Distributions - even if there are some improvements here, I assume there are many other problems both in upstream and downstream code. Floating point numbers are inherently limited, we can only operate within their restrictions. Alternatively, you might have to switch to number types with higher or arbitrary precision.
I think this is a reasonable position, especially since, after implementing log-space sampling, the subnormal edge case only emerges when the alphas themselves are already deeply subnormal.
> On the other hand, I think alternative samplers and distributions such as `ExpGamma` that operate in log space would be quite useful in different places (as evidenced by a few old issues I had opened a few years ago IIRC). So I think we should
> - separate the `ExpGamma` part, i.e., add an `ExpGamma` distribution + the samplers in a separate PR and make sure they are properly tested using the existing test infrastructure for distributions and samplers

I think @chelate was working on this. I can fork this branch to a branch with only the ExpGamma sampling so chelate can do a PR with that and their own work (testing, documentation, etc.).

> - change this PR to use `ExpGamma` in `Dirichlet` when it's beneficial (requires numerical experiments + benchmarks)?
There are two types of testing that can be done.

We already know there's a cutoff around alpha ≈ 4e-8 where the current method (no log-space sampling) breaks completely, so anything near that cutoff already benefits from the log-space path. Determining an upper bound for the cutoff, though, would indeed require some benchmarking.

And then there's benchmarking for when to switch between Liu-Martin-Syring and the Inverse Power sampler (the cutoff currently at 0.3). That one might actually require more benchmarking, since performance between the log-space and current samplers should be similar. A rough sketch of that kind of benchmark is below.
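For instance, one might probe the samplers like this, assuming `ExpGammaIPSampler` from earlier in the thread is defined (the shape values are arbitrary):

```julia
using BenchmarkTools, Distributions, Random

# Compare the log-space sampler against the current linear-space Gamma path
# at a few small shapes near the proposed cutoffs.
for α in (0.05, 0.1, 0.3, 0.5)
    println("shape = ", α)
    @btime rand(rng, s) setup = (rng = Xoshiro(1); s = ExpGammaIPSampler(Gamma($α)))
    @btime rand(rng, d) setup = (rng = Xoshiro(1); d = Gamma($α))
end
```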