david graff
I'm in favor of closing this PR and moving `nn.LossFunction` over to using [`extra_repr()`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.extra_repr) rather than defining some custom `__repr__()` method for this class. This is the "official" way to...
> > I'm in favor of closing this PR and moving `nn.LossFunction` over to using [`extra_repr()`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.extra_repr) rather than defining some custom `__repr__()` method for this class. This is the "official"...
I had the same idea. It could get quite verbose on the command line, but we require users to do this already in `chemprop train`, so I don't think it's...
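For reference, a minimal sketch of the `extra_repr()` pattern (the `MSELoss` class and `task_weights` parameter here are hypothetical, just to illustrate; this is not chemprop's actual loss API):

```python
import torch
from torch import nn


class MSELoss(nn.Module):
    """Hypothetical chemprop-style loss module, for illustration only."""

    def __init__(self, task_weights=None):
        super().__init__()
        self.task_weights = task_weights

    def extra_repr(self) -> str:
        # extra_repr() is picked up automatically by nn.Module.__repr__,
        # so there is no need to override __repr__ itself.
        return f"task_weights={self.task_weights}"


print(MSELoss(task_weights=[1.0, 2.0]))
# MSELoss(task_weights=[1.0, 2.0])
```

Because `nn.Module.__repr__` also recurses into submodules, this composes for free if the loss ever gains learnable children.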
good point re: batch norm. Do you still observe the same phenomenon when setting `overfit_batches=1`? Also, the stored statistics of batch norm use the biased standard deviation during training and...
hmm that's odd re: `shuffle` impacting performance. There should be no order-dependence on predictions/performance. Also, I did mean `overfit_batches=1`. This is so that the fitted statistics of the batch norm...
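To see why the batch-norm statistics never match exactly even when fitting on a single repeated batch, here's a small standalone demo (plain PyTorch, not Lightning): during training, normalization uses the *biased* batch variance, while the stored running statistics use the *unbiased* estimate, so train- and eval-mode outputs on the same batch differ slightly.

```python
import torch
from torch import nn

torch.manual_seed(0)

# momentum=None -> running stats are a cumulative average, so after one
# forward pass they equal this batch's (unbiased-variance) statistics.
bn = nn.BatchNorm1d(4, momentum=None)
x = torch.randn(8, 4)

bn.train()
y_train = bn(x)  # normalized with the *biased* batch variance

bn.eval()
y_eval = bn(x)   # normalized with the stored (*unbiased*) running variance

# The two outputs disagree by a factor of roughly sqrt((N-1)/N) in scale,
# purely because of the biased/unbiased variance mismatch.
print(torch.allclose(y_train, y_eval, atol=1e-4))
```

With a batch size of 8 the scale mismatch is about `sqrt(7/8) ≈ 0.935`, which is why even `overfit_batches=1` won't make train and eval outputs agree bit-for-bit.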
Broad question: is chemprop an ML repo or an active learning repo? To do active learning, you'll need a completely new object model (optimizer, history, acquisition function, design space, etc.)...
I've been wondering about this as well. Could we consider splitting the atom featurization into `label` + "additional features", where `label` is fed through an `nn.Embedding(NUM_ATOMS)` and...
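A rough sketch of what that split could look like (the `AtomFeaturizer` class, `NUM_ATOMS` size, and dimensions below are all hypothetical, just to make the idea concrete):

```python
import torch
from torch import nn

NUM_ATOMS = 100  # hypothetical size of the atomic-number vocabulary


class AtomFeaturizer(nn.Module):
    """Sketch: a learned embedding for the atom `label`, concatenated with
    precomputed "additional features" (not chemprop's actual API)."""

    def __init__(self, embed_dim: int = 16):
        super().__init__()
        self.embedding = nn.Embedding(NUM_ATOMS, embed_dim)

    def forward(self, atomic_nums: torch.Tensor, extra: torch.Tensor) -> torch.Tensor:
        # atomic_nums: (n_atoms,) integer labels; extra: (n_atoms, d) floats
        return torch.cat([self.embedding(atomic_nums), extra], dim=1)


feats = AtomFeaturizer()(torch.tensor([6, 7, 8]), torch.randn(3, 5))
print(feats.shape)  # torch.Size([3, 21])
```

The upside is that the label representation becomes learnable rather than a fixed one-hot, while the extra descriptors pass through untouched.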
I've been titling my issues with [v2]. Should we stop doing that and just add a `v1` tag to demarcate issues as v1 issues now?
I'm a bit of a broken record at this point, but IMO this feature falls under "reinventing the wheel." Users can already control GPU visibility via the `CUDA_VISIBLE_DEVICES` environment variable....
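Concretely, the mechanism looks like this (using `echo` as a stand-in for the actual `chemprop train` invocation, which would be prefixed the same way):

```shell
# Expose only physical GPUs 1 and 3 to the process; inside it,
# they are renumbered from zero (i.e. they appear as cuda:0 and cuda:1).
CUDA_VISIBLE_DEVICES=1,3 sh -c 'echo "visible GPUs: $CUDA_VISIBLE_DEVICES"'
```

Because the CUDA runtime handles the masking, no device-selection code is needed on our side at all.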
Also, Lightning can auto-select underutilized GPUs if you only tell it the number that you want. If we stick to the paradigm of having users select _indices_, then we lose...