Sait Cakmak
This is just a product of how these transforms are implemented. Their attributes are not cached the way GPyTorch caches `train_train_covar` etc., but are buffers (or parameters in the case...
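For illustration, here is a minimal plain-PyTorch sketch (my own example, not the actual transform code) of the distinction: a registered buffer is part of the module state and round-trips through `state_dict()` / `load_state_dict()`, while an ad-hoc cache attribute does not.

```
import torch
from torch import nn


class ToyTransform(nn.Module):
    """Toy module contrasting buffers with ad-hoc caches (hypothetical example)."""

    def __init__(self) -> None:
        super().__init__()
        # Buffer: serialized in state_dict and overwritten by load_state_dict.
        self.register_buffer("offset", torch.zeros(1))
        # Ad-hoc cache: a plain Python attribute, never serialized.
        self._cache = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        self._cache = x.mean()  # recomputed lazily, like a GPyTorch-style cache
        return x - self.offset


t = ToyTransform()
print(list(t.state_dict().keys()))  # ['offset'] -- the cache is not part of the state
```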
Ok, here's what's happening. After model training, an `mll.eval()` call triggers this [bit of code](https://github.com/pytorch/botorch/blob/main/botorch/models/model.py#L186-L228), which updates `model.train_inputs` with the transformed inputs. When you reload the state dict, you...
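As a rough illustration of that behavior, a minimal sketch (my own example, assuming a `SingleTaskGP` with a `Normalize` input transform): in train mode `train_inputs` holds the raw inputs, and after `eval()` it holds the transformed ones.

```
import torch
from botorch.models import SingleTaskGP
from botorch.models.transforms.input import Normalize

train_X = 5.0 * torch.rand(10, 2, dtype=torch.double)  # raw inputs on [0, 5]
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y, input_transform=Normalize(d=2))

print(model.train_inputs[0].max())  # ~5.0: raw inputs while in train mode
model.eval()                        # triggers the linked code path
print(model.train_inputs[0].max())  # ~1.0: train_inputs now hold the normalized values
```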
Another thing to note is that currently, in `eval` mode, for the input transforms to be applied you should call the model through `model.posterior`. Otherwise, the input transforms will only be...
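A small sketch of what that means in practice (again my own example, not from the original comment):

```
import torch
from botorch.models import SingleTaskGP
from botorch.models.transforms.input import Normalize

train_X = 5.0 * torch.rand(10, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y, input_transform=Normalize(d=2)).eval()

test_X = 5.0 * torch.rand(4, 2, dtype=torch.double)
post = model.posterior(test_X)  # input transform is applied to test_X here
mvn = model(test_X)             # eval-mode forward: test_X is passed through untransformed
```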
I haven't touched the input transform refactor diff in quite a while. IIRC, it was most of the way there; it just needed cleanup of some remaining models & tests. It'd...
Hi @AdrianSosic. Thanks for sharing the simple repro. This does reproduce for me on both 0.12.0 and 0.13.0. The memory usage climbs by a few GB per replication until it...
I think I've identified the part that causes the memory leak, but I don't yet know why. I reduced the repro all the way to evaluations of `qNIPV`, and added a...
That can be simplified further. No need for the acqf.
```
from contextlib import nullcontext
from botorch.models import SingleTaskGP
import torch
from botorch import settings
from botorch.sampling.normal import SobolQMCNormalSampler
from...
```
And we can reproduce directly with a single gpytorch context manager:
```
from botorch.models import SingleTaskGP
import torch
from botorch.sampling.normal import SobolQMCNormalSampler
from gpytorch import settings as gpt_settings
from torch...
```
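The snippet above is cut off, so purely as a guess at its shape: a hypothetical sketch of such a loop, assuming the single context manager in question is `gpt_settings.detach_test_caches(False)` (which would match the `detach` behavior discussed below; the actual repro may well differ).

```
import torch
from botorch.models import SingleTaskGP
from botorch.sampling.normal import SobolQMCNormalSampler
from gpytorch import settings as gpt_settings

train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y).eval()
sampler = SobolQMCNormalSampler(sample_shape=torch.Size([64]))

for i in range(100):
    test_X = torch.rand(128, 2, dtype=torch.double)
    # Assumption: the context manager is detach_test_caches(False), i.e. the
    # prediction-strategy caches stay attached to the autograd graph.
    with gpt_settings.detach_test_caches(False):
        posterior = model.posterior(test_X)
        samples = sampler(posterior)
    # If those caches (and the graphs hanging off them) are never freed,
    # memory usage would climb across iterations.
```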
That's my guess as well. The context manager seems to control the `detach` calls for some mean and covar caches within the GPyTorch prediction strategy. I'll look into it more today.
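To illustrate why a missing `detach` would look exactly like this kind of leak, a tiny plain-PyTorch sketch (my own illustration, not GPyTorch code): a cached tensor that is still attached to its autograd graph keeps every intermediate tensor in that graph alive.

```
import torch

cache = []
for i in range(3):
    x = torch.randn(1000, 1000, requires_grad=True)
    y = (x @ x).sum()           # builds a graph that holds the large intermediates
    cache.append(y)             # "leaks": the graph stays reachable through the cache
    # cache.append(y.detach())  # detaching instead would let the graph be freed
```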