Max Balandat

476 comments by Max Balandat

If `N` is the input dimension, then yes, `10 x 1 x N` is the right shape to pass to the model for batch evaluation. Though I'm not sure the...
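As a hedged illustration of that shape convention (the model, data, and value of `N` below are hypothetical, not from the issue): BoTorch models take inputs of shape `batch_shape x q x d`, so a `10 x 1 x N` tensor means 10 batches of a single point each.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood

N = 3  # input dimension (hypothetical)
train_X = torch.rand(20, N, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True) + 0.05 * torch.randn(20, 1, dtype=torch.double)

model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# `batch_shape x q x d`: 10 batches, q = 1 point per batch, d = N input dimensions
test_X = torch.rand(10, 1, N, dtype=torch.double)
posterior = model.posterior(test_X)
print(posterior.mean.shape)  # torch.Size([10, 1, 1])
```

With `q = 1` each of the 10 points is evaluated independently (in batch); a single `10 x N` (or `1 x 10 x N`) input would instead yield a joint posterior over all 10 points.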

@jpchen I think generally the answer is yes; happy to consider them if they are more generally useful. Can you point me to the implementations? Re the white noise kernel, we...

I can also look at the internal ones :)

You could try a standard multi-task GP model here, where the individual would be the “task”.

So for a Hadamard-type multi-task model (rather than a Kronecker-type one) you don't need to have the observations of the different tasks at the same locations. In a sense there...

1. https://github.com/cornellius-gp/gpytorch/blob/master/examples/03_Multitask_Exact_GPs/Hadamard_Multitask_GP_Regression.ipynb
2. Yes
3. You may be ok then - it's going to take a while to fit the model but if you don't need it to be very...
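To make the Hadamard setup from the comments above concrete, here is a rough sketch in the style of the linked notebook (the two-task setup, data, and names are illustrative assumptions, not from the discussion). Each observation carries a task index, so the two "individuals" can have different numbers of observations at different input locations.

```python
import torch
import gpytorch

class HadamardMultitaskGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_i, train_y, likelihood, num_tasks):
        super().__init__((train_x, train_i), train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.RBFKernel()
        # low-rank task ("individual") covariance
        self.task_covar_module = gpytorch.kernels.IndexKernel(num_tasks=num_tasks, rank=1)

    def forward(self, x, i):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)          # data covariance
        covar_i = self.task_covar_module(i)     # task covariance
        # elementwise (Hadamard) product of data and task covariances
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x.mul(covar_i))

# Tasks need not share input locations or counts:
train_x1 = torch.rand(25, 1)
train_x2 = torch.rand(40, 1)
train_x = torch.cat([train_x1, train_x2])
train_i = torch.cat([torch.zeros(25, 1, dtype=torch.long),
                     torch.ones(40, 1, dtype=torch.long)])
train_y = torch.cat([torch.sin(train_x1 * 6).squeeze(-1),
                     torch.cos(train_x2 * 6).squeeze(-1)])

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = HadamardMultitaskGP(train_x, train_i, train_y, likelihood, num_tasks=2)
# During training/evaluation the model is called with both inputs and task indices:
# output = model(train_x, train_i)
```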

Hmm, I see what's going on; this is pretty nasty. Basically, we're properly permuting the batch dimensions of the input tensors, but we don't do anything to the `batch_shape` of...

@sdaulton, @dme65 I am not sure what happens exactly when this is not hitting the max eager threshold, but I wonder if it could be that we're unnecessarily broadcasting even...

@gpleiss, @jacobrgardner do either of you know whether this has ever worked properly in this setting? Or is batch-evaluating a batched model with different batch dimensions and then lazily handling...

Hmm, yeah, interesting idea. I think there might be a few things to be careful about, e.g. we will need to have things like `.size()` and `.shape()` return the...