Tom Minka

47 comments by Tom Minka

The reason for using this approximation is in section 3.1.2 of the paper you linked.

It minimizes KL divergence with the arguments swapped. Is your issue that the code doesn't do what the paper says, or that you disagree with what the paper says to...

Please write out the formula that you think should be minimized. I suspect there is some confusion about what "minimize the KL divergence" actually means.

The issue is that this is not what "minimizing the KL divergence" means. The messages in EP and VMP minimize KL divergence of the approximate posterior to the true posterior...
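To see why the direction of the KL arguments matters, here is a minimal sketch using two hypothetical discrete distributions; the values of `p` and `q` are made up for illustration. The two directions generally give different numbers (and, when optimized, different approximations: the inclusive direction matches moments, the exclusive direction seeks a mode).

```python
import math

# Hypothetical true distribution p and approximation q (illustration only).
p = [0.5, 0.4, 0.1]
q = [0.7, 0.2, 0.1]

def kl(a, b):
    """KL divergence KL(a || b) = sum_i a_i * log(a_i / b_i)."""
    return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)

# The two argument orders are different objectives:
# EP-style moment matching minimizes KL(p || q) (inclusive),
# while VMP minimizes KL(q || p) (exclusive).
print(kl(p, q))  # KL(p || q)
print(kl(q, p))  # KL(q || p) -- generally a different value
```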

Message 7 doesn't minimize that objective either. It seems that all of your confusion comes from a misunderstanding of the objective of Variational Message Passing. The objective is explained in...
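For reference, the objective that Variational Message Passing minimizes is usually written as the exclusive KL divergence from the approximate posterior $q$ to the true posterior, minimized over a factorized family (symbols here follow the standard presentation, not this thread):

$$
q^* = \arg\min_{q} \; \mathrm{KL}\big(q(\theta) \,\|\, p(\theta \mid D)\big)
    = \arg\min_{q} \int q(\theta) \log \frac{q(\theta)}{p(\theta \mid D)} \, d\theta ,
\qquad q(\theta) = \prod_i q_i(\theta_i).
$$

Note the order of the arguments: this is not the same objective as minimizing $\mathrm{KL}(p \,\|\, q)$, which is what EP-style moment matching targets.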

1. To compute evidence correctly, you must put the If block around the whole model, especially the parameter declarations. The linked code doesn't do that.
2. You are right, the...

It seems that toggling `OptimiseInferenceCode` doesn't fix everything; it just makes some of the estimates better. I will look into a better solution. The full Matchbox Recommender implementation is not...

Damping is required for large numbers of traits. The attached PR shows how to do this.
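As a rough illustration of what damping means here (not the code from the PR): a damped fixed-point iteration replaces each raw message update with a convex blend of the old message and the new one, which stabilizes EP-style iterations. The function name and parameter values below are hypothetical; for exponential-family messages the blend is typically done in natural-parameter space.

```python
# Hypothetical damped update for a message parameter vector.
# alpha in (0, 1] is the step size: alpha = 1 recovers the undamped update,
# smaller alpha moves only part of the way toward the raw update.
def damped_update(old, raw, alpha=0.5):
    return [(1 - alpha) * o + alpha * r for o, r in zip(old, raw)]

old_msg = [1.0, 2.0]
raw_msg = [3.0, 0.0]
print(damped_update(old_msg, raw_msg, alpha=0.5))  # [2.0, 1.0]
```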