mlr3pipelines
feat: hotstart
Nothing to merge yet, but a proof of concept we should discuss. mlr3tuning is ready to use hotstarting efficiently, and @sebffischer wants to use hotstarting for torch. Currently this only works for pipelines that are deterministic up to the point where the hotstart model is reached.
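A minimal sketch of how hotstarting through a `GraphLearner` might look once this PoC lands. The GraphLearner-level hotstart support is exactly what this PR proposes, so the behaviour below is assumed rather than released API; `HotstartStack` and the `hotstart_stack` field are existing mlr3 machinery.

```r
library(mlr3)
library(mlr3learners)   # classif.xgboost supports forward hotstarting over nrounds
library(mlr3pipelines)

task = tsk("sonar")

# Deterministic preprocessing in front of a hotstart-capable learner.
graph_learner = as_learner(po("scale") %>>% lrn("classif.xgboost", nrounds = 50))
graph_learner$train(task)

# Keep the fitted model around as a hotstart source.
stack = HotstartStack$new(list(graph_learner$clone(deep = TRUE)))

# Continue training with more boosting rounds instead of starting from scratch.
graph_learner$param_set$values$classif.xgboost.nrounds = 200
graph_learner$hotstart_stack = stack
graph_learner$train(task)  # would resume from the 50-round model
```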
Further thoughts on this: during hotstarting, the task would actually not have to be sent through the pipeline again at all. If the task is cached right before the hotstart model, training can simply continue from there. However, this would require the task to be cached at that point during the first training.
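Sketch only: the caching described above is not implemented. Here the split between preprocessing and learner is done by hand to show the intended effect; the PR would need to store something equivalent to `cached_task` during the first training.

```r
library(mlr3)
library(mlr3learners)
library(mlr3pipelines)

task = tsk("sonar")
preproc = po("scale")
learner = lrn("classif.xgboost", nrounds = 50)

# "Cache" the task as it looks right before the hotstart model.
cached_task = preproc$train(list(task))[[1L]]
learner$train(cached_task)

stack = HotstartStack$new(list(learner$clone(deep = TRUE)))
learner$param_set$values$nrounds = 200
learner$hotstart_stack = stack
learner$train(cached_task)  # resumes from round 50 without re-running the pipeline
```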