readthedocs analytics says that we have several search queries that yield few or no useful results. Let's improve those: - gpu (only 2 results): make sure that the explanation of the `device` parameter...
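For instance, the documentation of the `device` parameter could show something along these lines (a minimal sketch; `ClassifierModule` is just a placeholder name for any `torch.nn.Module`):

```python
import torch
from torch import nn
from skorch import NeuralNetClassifier

class ClassifierModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(20, 2)

    def forward(self, X):
        return torch.softmax(self.dense(X), dim=-1)

# `device` takes the usual torch device strings; the module, criterion, and
# batches are moved to that device during fit/predict.
net = NeuralNetClassifier(
    ClassifierModule,
    device='cuda' if torch.cuda.is_available() else 'cpu',
)
```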
For example, `inferno.dataset.get_len([[(1,2),(2,3)],[(4,5)], [(7,8,9)]])` should return 3 but actually raises `ValueError: Dataset does not have consistent lengths.` Another example: `inferno.dataset.get_len([[(1,2),(2,3)],[(4,5)], [(7,8)]])` should return 3 but actually returns 2 (the length of the tuples). A workaround is to convert...
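For reference, a small script reproducing the two calls above (the import uses the project's then-current name, `inferno`):

```python
from inferno.dataset import get_len

try:
    # expected: 3
    print(get_len([[(1, 2), (2, 3)], [(4, 5)], [(7, 8, 9)]]))
except ValueError as exc:
    # actual: ValueError: Dataset does not have consistent lengths.
    print(exc)

# expected: 3, actual: 2 (the length of the tuples)
print(get_len([[(1, 2), (2, 3)], [(4, 5)], [(7, 8)]]))
```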
With PyTorch 1.2.0 came [`IterableDataset`][1] which only implements `__iter__` but no `__len__` and certainly no `__getitem__`. This is definitely a problem since we are using `Subset` to split the input...
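A minimal sketch of the clash, using plain PyTorch (no skorch involved): `Subset` relies on `__getitem__` and `__len__`, which an `IterableDataset` deliberately does not provide:

```python
from torch.utils.data import IterableDataset, Subset

class Stream(IterableDataset):
    # Only __iter__ is defined, as is typical for streaming datasets.
    def __iter__(self):
        return iter(range(10))

ds = Stream()
subset = Subset(ds, indices=[0, 1, 2])

# Iterating the original dataset works fine ...
print(list(ds))

# ... but indexing the subset delegates to ds[0], which IterableDataset does
# not support, so any train/valid split built on Subset breaks here.
try:
    subset[0]
except (TypeError, NotImplementedError) as exc:
    print(type(exc).__name__)
```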
I see the code `device = 'cuda' if torch.cuda.is_available() else 'cpu'` repeated often in user code. Maybe we should introduce `device='auto'` exactly for this case?
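A minimal sketch of what resolving `'auto'` could look like (`resolve_device` is a made-up helper for illustration, not an existing skorch function):

```python
import torch

def resolve_device(device):
    # Hypothetical helper: 'auto' picks the GPU when one is available,
    # otherwise falls back to the CPU; anything else is passed through.
    if device == 'auto':
        return 'cuda' if torch.cuda.is_available() else 'cpu'
    return device

print(resolve_device('auto'))  # 'cuda' on a GPU machine, 'cpu' otherwise
```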
As discussed in https://github.com/skorch-dev/skorch/issues/524, it is desirable to be able to tune the parameters of dataset transforms, like the ones from torchvision, using parameter searches. For this we should...
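One possible direction, sketched below under the assumption that skorch's `dataset__*` parameter routing can carry such arguments; `CropDataset`, `crop_size`, and the tiny module are made-up names purely for illustration:

```python
import numpy as np
import torch
from torch import nn
from torch.utils.data import Dataset
from torchvision import transforms
from sklearn.model_selection import GridSearchCV
from skorch import NeuralNetClassifier

class CropDataset(Dataset):
    """Expose the transform's knobs as constructor arguments so they can be searched."""
    def __init__(self, X, y=None, crop_size=6):
        self.X, self.y = X, y
        self.tf = transforms.Compose([
            transforms.ToPILImage(),
            transforms.CenterCrop(crop_size),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.X)

    def __getitem__(self, i):
        yi = self.y[i] if self.y is not None else 0
        return self.tf(self.X[i]), yi

class Module(nn.Module):
    def __init__(self):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # makes the module crop-size-agnostic
        self.fc = nn.Linear(1, 2)

    def forward(self, X):
        return torch.log_softmax(self.fc(self.pool(X).flatten(1)), dim=-1)

X = np.random.randint(0, 255, size=(40, 8, 8, 1), dtype=np.uint8)
y = np.random.randint(0, 2, size=40).astype(np.int64)

net = NeuralNetClassifier(
    Module,
    dataset=CropDataset,
    train_split=None,
    max_epochs=2,
    verbose=0,
)

# The crop size is now just another hyperparameter for the search.
gs = GridSearchCV(net, {'dataset__crop_size': [4, 6]}, cv=2, scoring='accuracy')
gs.fit(X, y)
print(gs.best_params_)
```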
I think this is a topic that is relevant to many users, and we can document the multiple-output case there as well. See also #428.
Whenever the user uses subsets of the validation data (or different data altogether), the caching will produce wrong results. Sometimes this is caught automatically when there is a mismatch of...
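The usual escape hatch today is to disable caching on the scoring callback, so that predictions are recomputed for whatever data the callback actually receives; a minimal sketch:

```python
from skorch.callbacks import EpochScoring

# use_caching=False makes the callback compute fresh predictions instead of
# reusing the cached ones from the full validation pass, which no longer line
# up once a subset (or different data) is scored.
valid_acc = EpochScoring('accuracy', lower_is_better=False, use_caching=False)
```

The callback is then passed to the net via `callbacks=[valid_acc]` as usual.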
There is a possible disconnect between the ordering of `y` and `X` when using a skorch net as a transformer in a sklearn pipeline while enabling shuffling on the valid iterator, which...
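A small demonstration of the misalignment with an identity module (the module name and its tiny unused parameter exist only to make the snippet runnable):

```python
import numpy as np
import torch
from torch import nn
from skorch import NeuralNetRegressor

class Identity(nn.Module):
    def __init__(self):
        super().__init__()
        self.unused = nn.Parameter(torch.zeros(1))  # only so the optimizer has something

    def forward(self, X):
        return X  # output order mirrors the order in which batches arrive

X = np.arange(10, dtype=np.float32).reshape(-1, 1)

net = NeuralNetRegressor(Identity, iterator_valid__shuffle=True, verbose=0)
net.initialize()

# The rows come back in shuffled order, so they no longer line up with the y
# that a downstream sklearn step would receive for the same samples.
print(net.predict(X).ravel())
```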
We need a method (possibly on the wrapper class) to initialize the random state for all components that are concerned with sampling. These include:

- the model (e.g. weight init,...
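For reference, a sketch of what such a method could cover at minimum (the helper name is made up; the exact list of components is what this issue is about):

```python
import random
import numpy as np
import torch

def set_random_state(seed):
    # Seed every source of randomness a skorch net typically touches: Python's
    # RNG, NumPy (e.g. train/valid splitting), and torch on CPU and GPU
    # (weight init, dropout, data loader shuffling).
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
```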
There was an active discussion in #14 about the drawbacks of passing initialized objects, such as callbacks, to the `NeuralNet` constructor. Let's have a focused discussion about this issue here....