Kevin P Murphy
On a related note, it seems that the log-det-Jacobian term is missing from the objective function that is passed to HMC. (At least I could not see the string 'jacfwd'...
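For context, the missing term is the change-of-variables correction: when HMC runs on an unconstrained reparameterization, the target log density must include log|det J| of the transform. A minimal sketch of what that looks like with `jax.jacfwd`, using a hypothetical scalar exp-transform and a stand-in Exponential(1) target:

```python
import jax
import jax.numpy as jnp

def log_prob_constrained(theta):
    # stand-in target on the constrained space theta > 0:
    # Exponential(1) log-density (up to a constant)
    return -theta

def transform(z):
    # map unconstrained z to constrained theta > 0
    return jnp.exp(z)

def log_prob_unconstrained(z):
    # objective HMC should see: target log-prob plus log|det J|
    theta = transform(z)
    log_det_jac = jnp.log(jnp.abs(jax.jacfwd(transform)(z)))
    return log_prob_constrained(theta) + log_det_jac
```

Here the correction works out to `z` itself (since d/dz exp(z) = exp(z)); omitting it would bias the sampler toward the wrong distribution.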
they look identical - what should it say?
You are right. I have finally fixed this.
Sure. You don't need to use tfp, and we don't even need bijax. I suggest you code the Laplace approximation from scratch, similar to https://github.com/probml/pyprobml/blob/master/notebooks/book1/04/laplace_approx_beta_binom_jax.ipynb which is used to compute...
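A from-scratch Laplace approximation is only a few lines in JAX: find the posterior mode, then take the negative inverse Hessian of the log joint at the mode as the Gaussian variance. A sketch for a beta-binomial model (the data `N, k` and prior `a, b` below are hypothetical, not taken from the linked notebook):

```python
import jax
import jax.numpy as jnp

# hypothetical data and prior: N trials, k successes, Beta(a, b) prior
N, k, a, b = 10, 7, 1.0, 1.0

def log_joint(theta):
    # unnormalized log posterior: log p(theta) + log p(D | theta)
    return ((a - 1) * jnp.log(theta) + (b - 1) * jnp.log(1 - theta)
            + k * jnp.log(theta) + (N - k) * jnp.log(1 - theta))

# posterior mode (closed form for the Beta posterior; in general,
# use a few steps of gradient ascent instead)
theta_map = (k + a - 1) / (N + a + b - 2)

# curvature at the mode gives the Gaussian approximation's variance
hessian = jax.grad(jax.grad(log_joint))(theta_map)
sigma2 = -1.0 / hessian
```

The approximate posterior is then N(theta_map, sigma2); no tfp or bijax machinery needed.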
We plan to reimplement the bandit code on top of our new rebayes library in the next few weeks. Please check back later.
Oh I see. It's just a typo. It should be `E [Y ] = E [W X] = E [W] E [X] = 0`.
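The corrected identity is easy to sanity-check numerically: if W and X are independent and W is zero-mean, then E[Y] = E[WX] = E[W]E[X] = 0. A quick Monte Carlo check (the specific distributions below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0.0, 1.0, size=1_000_000)   # E[W] = 0
X = rng.uniform(0.0, 1.0, size=1_000_000)  # independent of W
Y = W * X

print(Y.mean())  # close to 0, as the identity predicts
```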
Actually, in view of the good performance of GoogLeNet, maybe it is worth keeping? :)
I have added some extra tests to test_kalman.jl. They pass locally on my laptop, but Travis complains and I don't know why. (I am new to Julia and open source development...)...
The same problem persists in /github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb (see duplicate issue https://github.com/huggingface/diffusers/issues/1922)
B. W. Larsen, S. Fort, N. Becker, and S. Ganguli, “How many degrees of freedom do we need to train deep networks: a loss landscape perspective,” in ICLR, 2022 [Online]....