Avi Bryant
Worth also looking into https://gitlab.com/radfordneal/xsum
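For context, xsum is Radford Neal's library for exactly-rounded summation of floating-point numbers. A rough illustration of the failure mode it addresses, using plain Neumaier compensated summation in Python (xsum itself uses a different, superaccumulator-based algorithm, so this is only a sketch of the problem):

```python
def neumaier_sum(xs):
    """Compensated summation: tracks the low-order bits that naive
    floating-point addition discards."""
    total = 0.0
    compensation = 0.0
    for x in xs:
        t = total + x
        if abs(total) >= abs(x):
            compensation += (total - t) + x  # low bits of x were lost
        else:
            compensation += (x - t) + total  # low bits of total were lost
        total = t
    return total + compensation

data = [1e16, 1.0, -1e16]
print(sum(data))           # naive: 0.0 (the 1.0 is absorbed)
print(neumaier_sum(data))  # compensated: 1.0
```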
https://github.com/polynote/polynote/blob/master/polynote-runtime/src/main/scala/polynote/runtime/python/PythonObject.scala#L102
https://github.com/stripe/rainier/pull/433 will partially address this. The other thing we should do is analyze whether it makes sense to distribute the likelihood during `observe` when we have a `Fn`...
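For reference, the reason distributing the likelihood is even an option: with i.i.d. observations the log-likelihood is a sum of per-observation terms, so it can be evaluated chunk by chunk and the partial sums combined. A minimal Python sketch of that identity (the function names here are hypothetical illustrations, not Rainier's API):

```python
import math

def normal_logpdf(x, mu=0.0, sigma=1.0):
    # log density of N(mu, sigma^2) at x
    z = (x - mu) / sigma
    return -0.5 * z * z - math.log(sigma) - 0.5 * math.log(2 * math.pi)

def log_likelihood(data, mu=0.0, sigma=1.0):
    # i.i.d. assumption: total log-likelihood is a sum over observations
    return sum(normal_logpdf(x, mu, sigma) for x in data)

data = [0.3, -1.2, 0.8, 2.1, -0.5, 0.0]
chunks = [data[:2], data[2:4], data[4:]]

full = log_likelihood(data)
distributed = sum(log_likelihood(chunk) for chunk in chunks)
# The two agree (up to floating-point reassociation), which is what
# makes it safe to evaluate the likelihood per partition.
print(abs(full - distributed) < 1e-9)
```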
Alternatively, implement dHMC: https://xianblog.wordpress.com/2017/07/03/hamiltonian-mc-on-discrete-spaces/
@antoniakon thanks for the report. I'll try to reproduce and figure out what's going on. In the meantime, it will probably work fine for you to just use `Cauchy(0,1)` followed...
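For context on why a `Cauchy(0,1)`-based workaround is plausible here: the standard half-Cauchy density is twice the Cauchy density restricted to the nonnegative reals, so the absolute value of a Cauchy(0,1) draw is half-Cauchy distributed. A quick check of that textbook identity in Python (this illustrates the general relationship, not the elided Rainier code):

```python
import math

def cauchy_pdf(x):
    # standard Cauchy(0, 1) density
    return 1.0 / (math.pi * (1.0 + x * x))

def half_cauchy_pdf(x):
    # standard half-Cauchy density: the Cauchy density folded onto [0, inf)
    return 2.0 / (math.pi * (1.0 + x * x)) if x >= 0 else 0.0

# |X| for X ~ Cauchy(0,1) has density p(x) + p(-x) = 2 p(x) on x >= 0,
# which is exactly the half-Cauchy density:
for x in (0.0, 0.5, 1.0, 3.0):
    assert abs(half_cauchy_pdf(x) - (cauchy_pdf(x) + cauchy_pdf(-x))) < 1e-15
print("identity holds")
```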
@antoniakon can you show me the full code for the model that produced those density plots? (Since they don't seem to be plots for a standard half-Cauchy). I can't find...
* Could also have `output0` ... `outputN` methods on `CompiledFunction`.
* This probably isn't reasonable given that with gradients, we need `targets * parameters` outputs.
* However, if we generated output methods...
Should also warn when doing this inside a generator.
I wonder if your Clojure port is missing this part:

```
A key that starts with "n:" represents the last known time a message has been observed from a given...
```
How does this work relate to https://github.com/twitter/scalding/blob/b1d99378b25b27fe128cb083e46032c83e9e8a88/scalding-core/src/main/scala/com/twitter/scalding/mathematics/TypedSimilarity.scala, which also includes a simple graph abstraction? It might be informative to see what those algorithms look like implemented in terms of this...
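As a point of comparison, the core of `TypedSimilarity` is cosine similarity between graph nodes, computed from their neighbor sets. A toy Python sketch of that computation over a bare adjacency-map "graph abstraction" (illustrative only, not scalding's implementation, which runs as distributed map/reduce steps):

```python
import math

def cosine_similarity(graph, a, b):
    """Cosine similarity between two nodes, treating each node's
    neighbor set as a 0/1 vector."""
    na, nb = graph[a], graph[b]
    if not na or not nb:
        return 0.0
    shared = len(na & nb)  # dot product of the two 0/1 vectors
    return shared / math.sqrt(len(na) * len(nb))

# A tiny directed graph as an adjacency map of neighbor sets.
graph = {
    "a": {"x", "y"},
    "b": {"x", "y"},
    "c": {"x"},
}
print(cosine_similarity(graph, "a", "b"))  # 1.0: identical neighbor sets
print(round(cosine_similarity(graph, "a", "c"), 4))  # 0.7071
```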