mlr3torch
Deep learning framework for the mlr3 ecosystem based on torch
The `lazy_tensor` datatype is currently restrictive in various ways:
1. Currently there can only be a single preprocessing graph for the whole tensor-column.
2. Currently it is assumed that there...
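For context, a minimal sketch of what a single-graph `lazy_tensor` column looks like in practice, assuming `as_lazy_tensor()` accepts a `torch_tensor` and that `materialize()` applies the column's one preprocessing graph:

```r
library(mlr3torch)
library(torch)

# a lazy_tensor column wraps the underlying data together with one
# preprocessing graph; nothing is computed until materialization
lt = as_lazy_tensor(torch_randn(8, 3))
print(lt)

# materializing applies the column's single preprocessing graph and
# returns the actual tensors (here a tensor of shape (8, 3))
materialize(lt, rbind = TRUE)
```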
It would be great if training could be continued from a checkpoint; currently, the optimizer and loss state are not stored.
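A minimal sketch of what storing that state could look like with plain torch, assuming the `state_dict()` / `load_state_dict()` methods on modules and optimizers and that `torch_save()` / `torch_load()` can serialize state dicts (named lists of tensors):

```r
library(torch)

net = nn_linear(10, 1)
opt = optim_adam(net$parameters, lr = 0.01)

# ... train for a number of epochs ...

# persist both the network weights and the optimizer state
torch_save(net$state_dict(), "net_state.pt")
torch_save(opt$state_dict(), "opt_state.pt")

# later: rebuild the objects and restore their state to resume training
net2 = nn_linear(10, 1)
opt2 = optim_adam(net2$parameters, lr = 0.01)
net2$load_state_dict(torch_load("net_state.pt"))
opt2$load_state_dict(torch_load("opt_state.pt"))
```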
It would be nice to be able to create a proper deep clone of a trained torch learner.
* [x] Support for cloning `nn_modules()` has already been added in torch...
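A minimal sketch of the building block this relies on, assuming `nn_module` objects (being R6 classes) support `$clone(deep = TRUE)` with the parameter tensors actually being copied rather than shared:

```r
library(torch)

net = nn_linear(4, 1)
net_copy = net$clone(deep = TRUE)

# modify the clone's parameters in place; the original must stay untouched
with_no_grad({
  net_copy$weight$add_(1)
})

# FALSE if the clone is a proper deep clone (no shared parameter tensors)
torch_equal(net$weight, net_copy$weight)
```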
Check out https://github.com/e-sensing/torchopt for more optimizers.
This parameter should be added once the new version of paradox is available, as it will be used often.
`mlr3torch` would benefit from some additional input checks (e.g. whether the shape of a lazy tensor is compatible with a specific learner); for that, `mlr3::assert_task_learner()` would have to call the learner's...
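A rough sketch of what such a check could look like; the helper function and its `expected_shape` argument are hypothetical (not existing mlr3torch API), and it is assumed that `materialize()` with `rbind = TRUE` returns a tensor whose first dimension is the batch:

```r
library(mlr3torch)

# hypothetical helper, not existing mlr3torch API: verify that a lazy_tensor
# feature of a task has the shape a given learner expects
check_lazy_tensor_shape = function(task, feature, expected_shape) {
  lt = task$data(cols = feature)[[1L]]
  # materialize a single element and drop the batch dimension
  observed = dim(materialize(lt[1L], rbind = TRUE))[-1L]
  if (!identical(as.integer(observed), as.integer(expected_shape))) {
    stop(sprintf(
      "Feature '%s' has shape (%s), but the learner expects (%s)",
      feature,
      paste(observed, collapse = ", "),
      paste(expected_shape, collapse = ", ")
    ))
  }
  invisible(TRUE)
}
```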