Arno Veletanlic

Results: 18 comments by Arno Veletanlic

Hi, I am a bit busy at the moment, but I can point to two possible sources for an easy implementation:

* a [Python copy](https://github.com/amber0309/HSIC) of the original Gretton et al....
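For orientation, the statistic in question is the biased HSIC estimate with RBF kernels from Gretton et al.; a minimal NumPy sketch (my own illustrative code, not the linked repo's) looks like:

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    # RBF Gram matrix from pairwise squared distances of a 1-D sample
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma**2))

def hsic(x, y, sigma=1.0):
    # biased HSIC estimate: trace(K H L H) / (n - 1)^2,
    # where H centres the two Gram matrices
    n = len(x)
    K, L = rbf_gram(x, sigma), rbf_gram(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=200)
h_dep = hsic(x, x**2)                  # dependent pair
h_ind = hsic(x, rng.normal(size=200))  # independent pair
```

A markedly larger value for the dependent pair than for the independent one is the qualitative behaviour the independence test is built on; the repo adds the permutation/gamma null distribution on top.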

If I were to do it my way, I would build a kernel in which the latent variables are extra parameters to optimise over, but I wouldn't know how to...

Wouldn't you say that a 2-layer GP is sufficient? g(f(X)+e1) + e2 = Y allows me to fit both a noise distribution and a mapping (fix X=x, change g so that...
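As an illustration of that generative structure (a sketch only, assuming RBF kernels for both layers and Gaussian e1, e2 with an arbitrary 0.1 scale; this samples from the model, it is not a fitting procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

def gp_sample(inputs, lengthscale=1.0, jitter=1e-6):
    # one draw from a zero-mean GP with an RBF kernel over `inputs`
    d2 = (inputs[:, None] - inputs[None, :]) ** 2
    K = np.exp(-d2 / (2 * lengthscale**2)) + jitter * np.eye(len(inputs))
    return np.linalg.cholesky(K) @ rng.normal(size=len(inputs))

X = np.linspace(0.0, 1.0, 50)
f = gp_sample(X)                   # inner layer f(X)
z = f + 0.1 * rng.normal(size=50)  # + e1
g = gp_sample(z)                   # outer layer g(.)
Y = g + 0.1 * rng.normal(size=50)  # + e2 gives Y
```

The point of the composition is visible here: e1 is pushed through g, so its effect on Y depends on x, which is what lets the two-layer model express input-dependent noise.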

Hi, I've tried to run the example you gave, and the following issue appeared:

```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
in fit_most_likely_HeteroskedasticGP(train_X, train_Y, covar_module, num_var_samples, max_iter, atol_mean, atol_var)...
```

Hi again, I ran the code without knowing whether it was correct and obtained a very similar fit for 20, 30, 40, 50, or 60 runs on my data: ![test_htr](https://user-images.githubusercontent.com/48685588/70302819-f839e380-1838-11ea-8218-eba013cc526a.png) Compared to a...

For reference: [the Colab link showing how to use a simpler model](https://colab.research.google.com/drive/1dOUHQzl3aQ8hz6QUtwRrXlQBGqZadQgG#scrollTo=V1MorMsJQa8Z) uses the following snippet to observe the variance:

```python
with torch.no_grad():
    # watch broadcasting here
    observed_var = torch.tensor(...
```

**Another related question:** If I wanted to save the previous heteroskedastic model and resume training instead of fitting a new one, would there be a principled/generic way of doing so?...
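One generic pattern (a sketch under the assumption that the model is a `torch.nn.Module`, which GPyTorch/BoTorch models are; the `Linear` layer here is a hypothetical stand-in for the heteroskedastic GP) is to checkpoint both the model and optimiser `state_dict`s and reload them before resuming:

```python
import io
import torch

model = torch.nn.Linear(1, 1)  # stand-in for the fitted model
opt = torch.optim.Adam(model.parameters(), lr=0.1)

# save model and optimiser state so training can resume later
buf = io.BytesIO()  # a file path works the same way
torch.save({"model": model.state_dict(), "opt": opt.state_dict()}, buf)

# later: rebuild the same architecture, then load the saved state
buf.seek(0)
ckpt = torch.load(buf)
model2 = torch.nn.Linear(1, 1)
model2.load_state_dict(ckpt["model"])
opt2 = torch.optim.Adam(model2.parameters(), lr=0.1)
opt2.load_state_dict(ckpt["opt"])
```

Restoring the optimiser state matters for warm starting with Adam, since its moment estimates otherwise restart from zero.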

Hi @Balandat, thanks for this detailed answer! About "warm starting": I mentioned it because the paper suggests that we "set G1=G3", which might imply that the current parameters of G3 should...