pymc4_prototypes
Make Basic tensorflow.distributions model
#2
What you are doing is sampling from the prior. Could you try to improve it to include an example using the HMC sampler from TensorFlow?
@junpenglao okay, on it!
@junpenglao I am unable to find an HMC distribution in TensorFlow, but I did find a couple of implementations of it using TensorFlow on GitHub. It's currently a feature request in TensorFlow's issues.
What do you suggest I do?
HMC is not a distribution; it is the Hamiltonian Monte Carlo sampler. You can find it in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/bayesflow/python/ops/hmc_impl.py
You can check out their test cases for some inspiration:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/bayesflow/python/kernel_tests/hmc_test.py
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/bayesflow/python/kernel_tests/metropolis_hastings_test.py
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/bayesflow/python/kernel_tests/monte_carlo_test.py
I think it might also help if you open a post on discourse and write down how you aim to implement it and your thought process so we can discuss.
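For context, a minimal sketch of what driving an HMC sampler over a TensorFlow log-probability looks like. This uses the tfp.mcmc API from TensorFlow Probability (the successor to tf.contrib.bayesflow, which comes up later in this thread) rather than the contrib module linked above, and assumes a standard-normal target for simplicity:

import tensorflow as tf
import tensorflow_probability as tfp

# The sampler only needs a log-probability function of the state --
# the same logp-centric interface discussed below.
def target_log_prob_fn(x):
    return tfp.distributions.Normal(loc=0., scale=1.).log_prob(x)

hmc = tfp.mcmc.HamiltonianMonteCarlo(
    target_log_prob_fn=target_log_prob_fn,
    step_size=0.5,
    num_leapfrog_steps=3)

samples, kernel_results = tfp.mcmc.sample_chain(
    num_results=1000,
    num_burnin_steps=500,
    current_state=tf.constant(0.),
    kernel=hmc)

with tf.Session() as sess:
    chain = sess.run(samples)  # numpy array of posterior draws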
I wonder if we can use the TensorFlow Dataset API and propagate gradients through it. To my knowledge there is an option to change the input source, which is what we need for Bayesian stuff.
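A quick check of that idea, sketched with the TF1 tf.data API: gradients do propagate from a log-likelihood back through values pulled from a Dataset iterator to a parameter.

import tensorflow as tf

# Observed data fed through tf.data rather than a placeholder.
data = tf.data.Dataset.from_tensor_slices([1.0, 2.0, 3.0])
x = data.batch(3).make_one_shot_iterator().get_next()

mu = tf.Variable(0.0)
loglik = tf.reduce_sum(tf.distributions.Normal(loc=mu, scale=1.0).log_prob(x))
grad = tf.gradients(loglik, mu)[0]  # gradient flows through the iterator

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([loglik, grad]))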
@junpenglao I now have a basic intuitive understanding of what HMC is.
Could I use your Pyro implementation here as a guideline?
I will be reading through the TensorFlow tests. I will open a post on Discourse once I have a basic idea in mind.
TIA
I think this would be much more powerful if you used a sampler from pymc3 (like HMC or NUTS, these specifically only require logp and dlogp).
I have opened a post here. https://discourse.pymc.io/t/hmc-sampling-in-tensorflow/797
@twiecki Are you suggesting using the models from TensorFlow and the sampling from PyMC3? I am unable to find posterior sampling functions in TensorFlow. Do you suggest I implement HMC or NUTS manually as a test?
The first one: make TensorFlow distributions work with the HMC implementation of PyMC3. Note that that implementation is Python-only and should just require logp and dlogp.
@twiecki The PyMC3 samplers still require Theano variables as input. Is there any way around this?
It probably does require some adaptation. You could copy-paste the code over (HMC is pretty self-contained, I think). In general, a good first step is probably to just get the logp and dlogp of a simple model, then see how that could be passed into the HMC class in PyMC3. That's really the work: figuring out how to combine the two.
import tensorflow as tf  # TF1-style graph/session API


class Norm():
    def __init__(self, mean=0., sd=1.):
        # Parameters are constants here, so there are no variables
        # to initialize before running the graph.
        self.mean = tf.constant(mean)
        self.sd = tf.constant(sd)
        self.n = tf.distributions.Normal(loc=self.mean, scale=self.sd)

    def sample(self, sample_shape=()):
        # Draw samples and return them as numpy values.
        out = self.n.sample(sample_shape)
        with tf.Session() as sess:
            return sess.run(out)

    def logp(self, sample_shape=(), value=None):
        # Log-density at `value`, falling back to a fresh sample.
        if value is None:
            value = self.sample(sample_shape)
        logp = self.n.log_prob(value)
        with tf.Session() as sess:
            return sess.run(logp)

    def dlogp(self, sample_shape=(), value=None):
        # Gradient of the log-density w.r.t. the parameters (mean, sd).
        if value is None:
            value = self.sample(sample_shape)
        dlogp = tf.gradients(self.n.log_prob(value), [self.mean, self.sd])
        with tf.Session() as sess:
            return sess.run(dlogp)
Am I on the right track here? @twiecki
@sharanry Yes, that looks like a great start. You then want to be able to evaluate the logp and dlogp for changing values of mu and sd (which are suggested by the sampler). See here for some code where we built a wrapper for Edward (which probably doesn't work anymore with newer versions): https://github.com/pymc-devs/pymc3/commit/98a2e038d2e83d94556a69d6e169e58f25024528
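A minimal sketch of that adaptation, assuming TF1-style placeholders so the sampler can feed new parameter values on every evaluation (the helpers logp_fn/dlogp_fn are hypothetical names, not part of PyMC3's API):

import tensorflow as tf

# Parameters are placeholders instead of constants, so logp/dlogp can
# be re-evaluated at whatever point the sampler proposes.
mu = tf.placeholder(tf.float32, shape=())
sd = tf.placeholder(tf.float32, shape=())
value = tf.placeholder(tf.float32, shape=())

dist = tf.distributions.Normal(loc=mu, scale=sd)
logp = dist.log_prob(value)
dlogp = tf.gradients(logp, [mu, sd])

sess = tf.Session()

def logp_fn(m, s, v):
    # Evaluate the log-density at sampler-proposed parameter values.
    return sess.run(logp, feed_dict={mu: m, sd: s, value: v})

def dlogp_fn(m, s, v):
    # Evaluate its gradient w.r.t. (mu, sd) at the same point.
    return sess.run(dlogp, feed_dict={mu: m, sd: s, value: v})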
I still don't like the proposed architecture, though I have no alternative to offer yet. The main drawback of this approach is that return sess.run(dlogp|sample|logp) is not symbolic.
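To illustrate that point (my reading of the comment), staying symbolic would mean returning tensors and deferring sess.run to the caller, e.g.:

def logp_t(dist, value):
    return dist.log_prob(value)  # a tensor, still composable

def dlogp_t(dist, value, params):
    return tf.gradients(dist.log_prob(value), params)  # also tensors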
Personally, I'm just waiting for Edward 2.0 to be released.
@twiecki Any specific reason why Edward support was removed from PyMC3?
@sharanry the team decided that extensions for third-party packages ought not to be part of the main repository, mainly due to maintenance overhead. Individuals are welcome to extend PyMC3 any way they wish, but it should be a separate project.
It was also removed because they changed the API and nobody was able to make it work at the time.
@sharanry GSoC applications are now open if you planned to submit something. There are some quite interesting developments regarding PyMC4 and TF Probability / Edward 2. Would be great to have you involved.