Wesley Maddox

Results: 69 comments of Wesley Maddox

Do you mind sharing a reproducible example of this behavior? I wasn't immediately able to produce significant differences when using the model definitions above.

Yes, this is because your dataloader is causing the training inputs to change at each iteration (i.e., you're trying to use stochastic gradient descent). To be able to not evaluate...
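As a rough illustration of why changing inputs matter (a plain-Python sketch with a hypothetical `ToyExactModel`, not the real GPyTorch class): an exact GP keeps caches keyed on the exact training inputs, so a shuffling dataloader that hands the model a different minibatch every iteration invalidates those caches at every step, while full-batch training builds them once.

```python
import random

class ToyExactModel:
    """Hypothetical stand-in for an exact GP: it caches work keyed on the
    exact training inputs, loosely analogous to GPyTorch's prediction caches."""
    def __init__(self):
        self._cache_key = None
        self.cache_misses = 0

    def fit_step(self, train_x):
        key = tuple(train_x)
        if key != self._cache_key:  # inputs changed -> caches must be rebuilt
            self._cache_key = key
            self.cache_misses += 1

data = list(range(8))

# Full-batch training: identical inputs every iteration, one cache build.
full = ToyExactModel()
for _ in range(5):
    full.fit_step(data)

# Dataloader-style training: a freshly shuffled minibatch each iteration,
# so the cache is invalidated at every step.
random.seed(0)
mini = ToyExactModel()
for _ in range(5):
    mini.fit_step(random.sample(data, 4))

print(full.cache_misses)  # 1
print(mini.cache_misses)
```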

Looks like the failing unit test was flaky.

Yes, this is quite possible (I have some research code doing exactly this with deep ensemble posteriors). In general, the mean and variance can be computed as the sample mean and variance of...
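For concreteness, here is a minimal sketch (hypothetical numbers, plain Python): if each ensemble member predicts a Gaussian, the ensemble predictive is a uniform mixture, and its first two moments follow the standard mixture-of-Gaussians formulas.

```python
# Each ensemble member i predicts a Gaussian N(mu_i, var_i) at a test point.
# The uniform-mixture predictive has moments:
#   mean = (1/M) * sum_i mu_i
#   var  = (1/M) * sum_i (var_i + mu_i**2) - mean**2
# i.e. the average member variance plus the variance of the member means.

member_means = [1.0, 2.0, 3.0]  # hypothetical per-member predictions
member_vars = [0.5, 0.5, 0.5]

M = len(member_means)
mean = sum(member_means) / M
var = sum(v + m ** 2 for m, v in zip(member_means, member_vars)) / M - mean ** 2

print(mean)  # 2.0
print(var)   # ~1.1667 = 0.5 (avg variance) + 2/3 (variance of the means)
```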

In case you haven't already implemented it: I've open-sourced a deep ensemble posterior class [here](https://github.com/samuelstanton/lambo/blob/dec2f303d847efefc4392ebfd2d78226d543388c/lambo/models/deep_ensemble.py#L177) that should be fairly generic and works with batching (the other code in the...

Yeah, here's a link to roughly the overall model class (https://github.com/samuelstanton/lambo/blob/7b67684b884f75f7007501978c5299514d0efb75/lambo/optimizers/pymoo.py#L343). As I think I mentioned previously, we were using genetic algorithms to optimize everything b/c the tasks we considered...

Weird, I can't replicate on my Mac, but could replicate on another Linux server before pulling to botorch (from 353f3764) / gpytorch master (from 32cde571). Then, a fresh install of...

I did not. Just tried replicating on botorch (from https://github.com/pytorch/botorch/commit/353f37649fa8d90d881e8ea20c11986b15723ef1) / gpytorch master (from 32cde571) and was able to reproduce on my Mac. Actually, the first posterior call fails there...

I don't have a full understanding of what's going on yet, but it seems to be related to how botorch is internally handling the batching:

```python
with gpytorch.settings.debug(False):
    model(train_x)
    print(model.input_transform.mins.shape)
```
...
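A minimal sketch of the kind of shape issue involved (plain Python with a hypothetical `mins_shape` helper, not the real botorch transform, which learns per-dimension minima as a buffer): when a model is evaluated in batch mode, the transform's minima can pick up an extra batch dimension, so downstream code expecting a `1 x d` shape breaks.

```python
def mins_shape(x):
    """Hypothetical stand-in for an input transform that computes per-dimension
    minima over the 'n' axis and reports the shape of the resulting buffer."""
    if isinstance(x[0][0], list):  # batched input: b x n x d
        mins = [[min(row[j] for row in batch) for j in range(len(batch[0]))]
                for batch in x]
        return (len(mins), 1, len(mins[0]))  # b x 1 x d: batch dim is kept
    mins = [min(row[j] for row in x) for j in range(len(x[0]))]
    return (1, len(mins))  # unbatched: 1 x d

train_x = [[0.1, 0.9], [0.4, 0.2], [0.7, 0.5]]  # n=3 points, d=2 dims
batched_x = [train_x, [[v + 1.0 for v in row] for row in train_x]]  # b=2

print(mins_shape(train_x))    # (1, 2)
print(mins_shape(batched_x))  # (2, 1, 2): the extra batch dim surprises callers
```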