David Widmann
I added Enzyme tests to DistributionsAD a while ago (though maybe only in a local branch, it seems?), but at that point there were so many test failures that I...
Based on the name, I assume that's https://github.com/TuringLang/Bijectors.jl/blob/9cd59070871cc7a29df0e401a24a08502241b230/src/bijectors/simplex.jl#L84 or https://github.com/TuringLang/Bijectors.jl/blob/9cd59070871cc7a29df0e401a24a08502241b230/src/bijectors/simplex.jl#L102. Can you see any immediate issues for Enzyme (apart from it being a bit "ugly" 😅)?
Marking tests as broken won't be sufficient as long as the tests cause Julia to segfault. But of course we could just not test some models and samplers with Enzyme. The unfortunate...
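A minimal sketch of what "just not testing" could look like (the model, sampler, and backend names are made up for illustration). The point is that a segfault kills the Julia process before `@test_broken` can record anything, so crashing combinations have to be skipped outright:

```julia
using Test

# Hypothetical list of model/sampler combinations known to segfault with
# Enzyme; they are skipped because @test_broken cannot catch a crash of
# the Julia process itself.
const ENZYME_SKIP = [("gdemo", "NUTS")]

function test_ad_backends(model_name, sampler_name)
    for backend in ("ForwardDiff", "ReverseDiff", "Enzyme")
        if backend == "Enzyme" && (model_name, sampler_name) in ENZYME_SKIP
            continue  # skip entirely instead of marking as broken
        end
        @testset "$model_name + $sampler_name with $backend" begin
            @test true  # placeholder for the actual gradient checks
        end
    end
end

test_ad_backends("gdemo", "NUTS")
```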
Yes, there's a performance comparison in the linked issue in the Enzyme repo.
Alternatively, you can also define the model with the unpacked arguments and define convenience functions that unpack for you. Something like

```julia
@model function multinom_model(x, N)
    p ~ Uniform(0, 1)
    ...
```
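For illustration, a guess at the full pattern (the original snippet is truncated, so the likelihood line below is a placeholder, not the actual model):

```julia
using Turing

# Model defined on the unpacked arguments directly:
@model function multinom_model(x, N)
    p ~ Uniform(0, 1)
    x ~ Binomial(N, p)  # placeholder likelihood
end

# Convenience method that does the unpacking for you:
multinom_model(data::NamedTuple) = multinom_model(data.x, data.N)

model = multinom_model((x = 3, N = 10))
```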
To me, the setup sounds like [simulation-based inference](http://simulation-based-inference.org/). I'm not familiar with the field too much (just learnt some things recently) but there are definitely approaches for performing approximate Bayesian...
I don't have an immediate answer, but generally the implementation in Distributions is much more specialized and hence more efficient than the previous `BernoulliLogit` in Turing (which fell back to...
Maybe the branches in the new `logpdf` code kill performance with ReverseDiff?
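One hedged way to check that hypothesis would be a micro-benchmark of the gradient through `logpdf` (whether this exercises exactly the same code path as in a Turing model is an assumption):

```julia
using BenchmarkTools
using Distributions
using ReverseDiff

# Differentiate the logit-parameterized Bernoulli log-density with respect
# to logitp, to see whether the branches in the new logpdf slow down the
# ReverseDiff tape.
f(logitp) = logpdf(BernoulliLogit(logitp[1]), true)

x0 = [0.3]
@btime ReverseDiff.gradient($f, $x0)
```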
No, I meant without two calls of `log1pexp`. I.e., something like

```julia
logpdf(d::BernoulliLogit, x::Bool) = -log1pexp(x ? -d.logitp : d.logitp)

function logpdf(d::BernoulliLogit, x::Real)
    logitp = d.logitp
    z = -log1pexp(x ==...
```
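For reference, a quick (assumed) numerical sanity check that the single-`log1pexp` form for `Bool` arguments agrees with the naive log-density:

```julia
using LogExpFunctions: log1pexp, logistic

logitp = 0.3
p = logistic(logitp)  # success probability implied by logitp
for x in (false, true)
    single = -log1pexp(x ? -logitp : logitp)
    naive = x ? log(p) : log(1 - p)
    @assert single ≈ naive
end
```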
😥 What happens if you implement the gradient of the `logpdf` function for ReverseDiff?
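For concreteness, a sketch of what such a hand-written gradient could look like, here as a ChainRules `rrule` for a made-up standalone helper (the derivative uses d(logpdf)/d(logitp) = x − logistic(logitp)). Assuming a ReverseDiff version that provides `ReverseDiff.@grad_from_chainrules`, the rule can then be hooked into ReverseDiff:

```julia
using ChainRulesCore
using LogExpFunctions: log1pexp, logistic

# Hypothetical standalone helper for the logit-parameterized Bernoulli
# log-density (same formula as the Bool method above):
bernoulli_logitpdf(logitp::Real, x::Bool) = -log1pexp(x ? -logitp : logitp)

function ChainRulesCore.rrule(::typeof(bernoulli_logitpdf), logitp::Real, x::Bool)
    y = bernoulli_logitpdf(logitp, x)
    function bernoulli_logitpdf_pullback(ȳ)
        # d(logpdf)/d(logitp) = x - logistic(logitp)
        return NoTangent(), ȳ * (x - logistic(logitp)), NoTangent()
    end
    return y, bernoulli_logitpdf_pullback
end
```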