
Speeding up GPflow using mixed precision

Open alexisboukouvalas opened this issue 8 years ago • 2 comments

Has anyone looked at using mixed precision to speed up computations in GPflow or TensorFlow? See, for example, Algorithm 1 in this paper: the Cholesky factorization would be done in float32, with iterative refinement in float64.
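To make the idea concrete, here is a rough NumPy sketch (not GPflow code, and only loosely following the paper's Algorithm 1): the expensive O(n³) Cholesky factorization runs in float32, while cheap O(n²) residual corrections in float64 recover high accuracy via iterative refinement.

```python
import numpy as np

def mixed_precision_solve(K, y, n_refine=3):
    """Solve K x = y: Cholesky in float32, iterative refinement in float64.

    A sketch of the mixed-precision idea, assuming K is symmetric
    positive definite and reasonably well conditioned.
    """
    L32 = np.linalg.cholesky(K.astype(np.float32))  # cheap, low precision

    def chol_solve(b):
        # Two triangular solves with the float32 factor.
        # (np.linalg.solve is used for simplicity; a real implementation
        # would use a dedicated triangular solver.)
        z = np.linalg.solve(L32, b.astype(np.float32))
        return np.linalg.solve(L32.T, z).astype(np.float64)

    x = chol_solve(y)
    for _ in range(n_refine):
        r = y - K @ x           # residual computed in float64
        x = x + chol_solve(r)   # correction from the low-precision factor
    return x

# Demo on a well-conditioned SPD matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
K = A @ A.T + 100.0 * np.eye(100)
y = rng.standard_normal(100)
x = mixed_precision_solve(K, y)
```

With a well-conditioned matrix, a few refinement steps are enough to push the residual down to float64 accuracy even though the factorization itself never left float32.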

I am bringing this up because I have noticed significant speed-ups when running GPflow with float32, even on CPU. In one example using Bayesian GPLVM, the algorithm takes 32 seconds to converge at 64-bit precision but only 11 seconds at 32-bit precision.

alexisboukouvalas avatar Nov 24 '17 09:11 alexisboukouvalas

The idea has been floating around. The paper you mention is really interesting, but it probably goes beyond what is possible in the current framework (from a quick skim it looks like this would require editing TF ops).

However, I do believe it's possible to compute certain things in low precision and then do other, more numerically sensitive things in high precision. Perhaps we could compute the kernel, or kernel expectations, in low precision and then do the Cholesky in high precision? The problem is that I don't think anybody has done the numerical analysis to show where this is justified. That doesn't stop anybody from trying it out empirically, however...
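A minimal NumPy sketch of that split (not GPflow code; the kernel function and jitter value are illustrative assumptions): evaluate the kernel matrix in float32, then cast up and add jitter before the numerically sensitive Cholesky in float64.

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel, computed in whatever dtype X has.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))

# Kernel evaluation in float32 (the cheap, tolerant part) ...
K32 = rbf_kernel(X.astype(np.float32))

# ... then cast up, add jitter to absorb the float32 rounding error,
# and run the sensitive factorization in float64.
K = K32.astype(np.float64) + 1e-4 * np.eye(len(X))
L = np.linalg.cholesky(K)
```

Note the jitter here is doing double duty: besides the usual numerical stabilization, it has to dominate the ~1e-7 relative error introduced by evaluating the kernel in float32, which is exactly the kind of question the missing numerical analysis would need to answer.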

Finally, a remaining hurdle is how to implement this correctly and neatly. It would be nice to have a flexible interface for choosing between high, low, or mixed precision.
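One way such an interface might look (purely hypothetical, not an existing GPflow API): a small per-stage precision policy that downstream code consults, so that switching between pure-float32, pure-float64, and mixed modes is a one-line change.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class PrecisionPolicy:
    # Hypothetical configuration object: which dtype each stage uses.
    kernel_dtype: type = np.float32
    factorization_dtype: type = np.float64

def chol_with_policy(X, kernel_fn, policy, jitter=1e-4):
    """Evaluate the kernel at policy.kernel_dtype, factorize at
    policy.factorization_dtype."""
    K = kernel_fn(X.astype(policy.kernel_dtype))
    K = K.astype(policy.factorization_dtype) + jitter * np.eye(len(X))
    return np.linalg.cholesky(K)

def linear_kernel(X):
    # A trivial kernel, just for the demo.
    return X @ X.T

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 4))
L = chol_with_policy(X, linear_kernel, PrecisionPolicy())
```

Pure float64 would then just be `PrecisionPolicy(np.float64, np.float64)`, which keeps the mixed-precision plumbing out of the model code itself.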

markvdw avatar Nov 24 '17 10:11 markvdw

Upvoting; I recently saw a paper discussing a mixed-precision implementation in PyTorch.

maciejskorski avatar Apr 19 '22 09:04 maciejskorski