David J. Harris

Results: 33 comments by David J. Harris

Example code:

```r
# Gibbs sampling. Update one species (column) at a time, conditional
# on everyone else.
for(i in 1:iter){
  for(j in 1:n.species){
    state[ , j] = rbinom(n.sites, ...
```
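The truncated snippet above can be fleshed out as a runnable sketch, shown here in Python rather than R. The conditional model is an illustrative assumption: `weights` (a species-by-species interaction matrix) and `intercepts` are hypothetical parameters standing in for whatever the original model used, and the logistic conditional is one common choice for binary Gibbs updates.

```python
import numpy as np

def gibbs_sample(state, weights, intercepts, n_iter, rng=None):
    """Gibbs sampler sketch for a binary site-by-species matrix.

    Each species (column) is resampled conditional on the current
    presence/absence of all the other species, via a hypothetical
    logistic conditional model.
    """
    rng = np.random.default_rng() if rng is None else rng
    state = state.astype(float).copy()
    n_sites, n_species = state.shape
    for _ in range(n_iter):
        for j in range(n_species):
            # Conditional log-odds for species j at each site, given
            # every other column; subtract the self-term so species j
            # does not condition on itself.
            eta = (intercepts[j]
                   + state @ weights[:, j]
                   - state[:, j] * weights[j, j])
            p = 1.0 / (1.0 + np.exp(-eta))
            # Analogue of R's rbinom(n.sites, 1, p)
            state[:, j] = rng.binomial(1, p)
    return state
```

With zero weights and intercepts this reduces to independent fair-coin draws per cell, which makes it easy to sanity-check shapes and value ranges.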

Special case of #64 that is no longer as interesting.

I think it's still worth adding someday.

Hey @wilkinsond, @goldingn, sorry for the delay getting back to you! Does it look like everything is working now? If not, I'd be happy to help now that I'm not...

Looks like my EM-based updates are very similar to what David MacKay recommended in his 1995 paper [Probable Networks and Plausible Predictions](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.136.4011&rep=rep1&type=pdf) (Equation 26). He uses mean squared error and...

See also: [Discriminative Transfer Learning with Tree-based Priors](http://www.cs.toronto.edu/~nitish/treebasedpriors.pdf) (Srivastava and Salakhutdinov) for a useful generalization. Looks like they only update the means, not the variances, though.

At minimum, this should have machinery to update priors (partly implemented), as well as built-in updates for Gaussian and Laplace priors.

Added; still need to test.

This exists now, but I don't think I've tested it. **Update**: removed until further notice

Currently the tests just check that the optimizers converge. They should also verify that the optimizers actually take the path toward the optimum that they're supposed to.