Dustin Tran
Interesting. My prediction is that SGLD is getting close to the boundaries of the support, which causes NaNs. Maybe try 1.3.5 with `auto_transform=False`? That should give similar results to 1.3.4 if I understand correctly...
There's definitely something to do here, either fixing a bug in the implementation or adding more helpful error messages. Would love help! On `auto_transform=True`, one hypothesis is that...
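To illustrate the boundary hypothesis, here is a toy NumPy sketch (my own, not Edward's SGLD or transform code; the Gamma(2, 1) target and the `exp` transform are choices made for illustration). A gradient step with Gaussian noise can push a positive-support variable below zero, where the log-density is NaN; after transforming to unconstrained space, every real-valued state is valid.

```python
import numpy as np

a = 2.0  # shape of an illustrative Gamma(a, rate=1) target

def logp_natural(x):
    # Gamma(a, 1) log-density (up to a constant) in its natural, positive support.
    return (a - 1.0) * np.log(x) - x

def logp_unconstrained(y):
    # Same density after x = exp(y), including the log-Jacobian term y:
    # (a - 1) * y - exp(y) + y = a * y - exp(y). Defined for all real y.
    return a * y - np.exp(y)

# An SGLD-style noisy step that overshoots past zero lands outside the support:
with np.errstate(invalid="ignore"):
    bad = logp_natural(-0.5)
print(np.isnan(bad))  # True: log of a negative number is NaN

# In the unconstrained space, any proposal the noise produces stays finite:
ys = np.linspace(-10.0, 10.0, 101)
print(np.all(np.isfinite(logp_unconstrained(ys))))  # True
```

This is consistent with `auto_transform=True` changing behavior: sampling in the transformed space never evaluates the density outside its support.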
Excellent. Thanks @krishkoushik! Will merge after tests pass.
Absolutely. Do you mean something specific by Bayes nets? One suggestion is to write a notebook on something for which sampling is the de facto approach, such as the item response theory...
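As a starting point for such a notebook, here is a minimal sketch of a Rasch (1PL) item response theory setup in plain NumPy (the sizes, priors, and names are my own illustrative choices, not from any existing Edward example): simulate responses and write down the joint log-density that a sampler would target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 50 students answering 20 items.
n_students, n_items = 50, 20
ability = rng.normal(size=n_students)     # theta_i, one per student
difficulty = rng.normal(size=n_items)     # b_j, one per item

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Rasch model: P(y_ij = 1) = sigmoid(theta_i - b_j).
probs = sigmoid(ability[:, None] - difficulty[None, :])
y = rng.binomial(1, probs)

def joint_log_prob(theta, b):
    # Standard-normal priors plus the Bernoulli likelihood; this is the
    # unnormalized target an MCMC sampler would draw from.
    logits = theta[:, None] - b[None, :]
    loglik = np.sum(y * logits - np.logaddexp(0.0, logits))
    logprior = -0.5 * np.sum(theta**2) - 0.5 * np.sum(b**2)
    return loglik + logprior

print(y.shape)                                      # (50, 20)
print(np.isfinite(joint_log_prob(ability, difficulty)))  # True
```

The notebook could then contrast a sampler on this target with a variational fit, which is where IRT makes a nice showcase.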
It's not supported unfortunately. Edward 2.0 (https://github.com/blei-lab/edward/pull/825) makes inference computation more transparent, with `sess.run`s specified by the user. So this issue will be resolved when it's released.
> Is there a clever idiomatic way to define the score gradient without compromising its derivative?

Yes, there is! I was just chatting with Jakob Foerster last week about getting...
Right, it depends on what you're taking derivatives of—exact first-order gradients (which DiCe solves) or the first-order gradient estimator. For the latter, have you seen Edward2's `klqp` implementation? It avoids...
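For concreteness, here is the score-function (REINFORCE) identity being discussed, checked by Monte Carlo in a toy case of my choosing (a Gaussian with objective `f(x) = x**2`, so the true gradient is known in closed form): `d/dmu E[f(x)] = E[f(x) * d/dmu log p(x; mu)]`, which for `x ~ N(mu, 1)` is `E[f(x) * (x - mu)]`.

```python
import numpy as np

rng = np.random.default_rng(0)

mu = 1.5        # parameter we differentiate with respect to
n = 200_000     # Monte Carlo samples

# Objective: E_{x ~ N(mu, 1)}[x^2] = mu^2 + 1, so the true gradient is 2*mu.
x = rng.normal(mu, 1.0, size=n)
f = x**2

# Score-function estimator: the score of N(mu, 1) at x is (x - mu).
grad_est = np.mean(f * (x - mu))
print(grad_est)  # close to 2 * mu = 3.0, up to Monte Carlo noise
```

The estimator is unbiased but high-variance, which is exactly where baselines (and the kind of implementation care `klqp` takes) come in.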
Looks like the function no longer exists in TF 1.7.0. Following the commit, we may be able to just remove it in the `ed.copy` implementation (https://github.com/tensorflow/tensorflow/commit/9fc9f19428e497f3a297538059804f69996a612e#diff-66fbf57743aed7c8407bcb218db3c491L2477).
We won't get to this in Edward, but may in Edward2 (https://github.com/tensorflow/probability) as we're working on conjugacy.
Thanks for the bug report. I'm having trouble reproducing it. I ran

```python
import edward as ed

ed.get_session()
ed.set_seed(42)
N = 1000
data = ed.models.Normal(5.0, 1.0).sample_n(N).eval()
sigma2 = ed.models.Gamma([1.0], [1.0])
mu = ed.models.Normal([0.0], [1.0])
x...
```