Kyle Beauchamp
`thin` sets how much to subsample the MCMC samples to reduce autocorrelation. The current implementation will only work for dense models. I have ideas about how to scale things up...
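Roughly, thinning just means keeping every `thin`-th draw; something like this toy numpy sketch (names made up):

```
import numpy as np

# Pretend `raw_samples` are the draws straight out of the chain.
raw_samples = np.random.rand(100000, 4)
thin = 500

# Keep every `thin`-th draw so the retained samples are less correlated.
thinned_samples = raw_samples[::thin]
print(thinned_samples.shape)  # (200, 4)
```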
Also: the current code uses an Uninformative prior, not a Dirichlet prior. I'm still figuring out the _best_ way to do a Dirichlet prior. The issue is that I need...
Speed is _definitely_ an issue here--these chains are super slow to converge...
It seems like we need ~1-2 minutes to converge for small matrices. I think the code is still worth having, as it's simple and easy to understand. Some error analysis...
So for estimating rates, it might be natural to use an exponential distribution as a prior. This should place maximum prior weight at Kij = 0, inducing sparsity in the...
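Something along these lines is what I have in mind (rough pymc-style sketch; the variable names and beta value are placeholders, not the actual code):

```
import pymc

num_states = 4
beta = 1.0  # larger beta concentrates more prior mass near K_ij = 0

# One exponential prior per off-diagonal rate matrix entry K_ij.
# The exponential density has its mode at 0, so it favors sparse rate matrices.
K_offdiag = pymc.Exponential("K_offdiag", beta=beta,
                             size=num_states * (num_states - 1))
```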
I just checked in a revised codebase that appears to be working. Here is a driver program that checks all 4 types of model:

```
import pymc
import numpy as np
...
```
Hm, there might still be something funny with the reversible rate sampler. I'm looking into it.
I think I fixed the issue with the ReversibleRateSampler. I needed to discard the early samples (burn-in). I added this feature and now I think things are working much better....
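For context, the burn-in handling just amounts to dropping the first chunk of the chain before thinning; roughly (toy numpy sketch, names made up):

```
import numpy as np

raw_samples = np.random.rand(100000, 4)  # placeholder for the chain output
burn_in = 10000
thin = 500

# Throw away the first `burn_in` draws (still influenced by the starting point),
# then keep every `thin`-th draw of what's left.
usable_samples = raw_samples[burn_in::thin]
```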
Here is an updated driver program:

```
import numpy as np
import scipy.sparse
from msmbuilder import mcmc, MSMLib

num_states = 2
alpha = 1
num_steps = 1000000
thin = 500
...
```
I think this is almost ready, so people can comment on what else they want to see.