mlforecast
Tuning slow for huge data sets because backtest splits are created for each round
Description
For data sets with huge amounts of data, tuning can be slow because the backtest splits are recreated for each tuning round (as far as I can tell) in mlforecast_objective.
The core fitting methods are usually fast because they rely on efficient implementations from libraries such as LightGBM or XGBoost, but the backtest split can become a bottleneck.
I propose to leave the backtest split function as it is, but to re-use the created splits (somehow) so they don't have to be computed multiple times per tuning run (as suggested by @yherin).
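To illustrate the idea, here is a minimal sketch (not mlforecast's actual implementation): a hypothetical `make_backtest_splits` helper builds the rolling-origin train/valid splits once, and the tuning loop then iterates over the cached splits instead of recomputing them per trial. The helper name, the `param_grid`, and the split logic are all assumptions for illustration.

```python
import pandas as pd


def make_backtest_splits(df, n_windows, h, time_col="ds"):
    """Hypothetical helper: build (train, valid) frame pairs once,
    mimicking a rolling-origin backtest with `n_windows` windows of
    horizon `h`. This is an illustrative sketch, not mlforecast's API."""
    times = sorted(df[time_col].unique())
    splits = []
    for w in range(n_windows, 0, -1):
        train_end = times[len(times) - w * h - 1]
        valid_end = times[len(times) - (w - 1) * h - 1]
        train = df[df[time_col] <= train_end]
        valid = df[(df[time_col] > train_end) & (df[time_col] <= valid_end)]
        splits.append((train, valid))
    return splits


# Toy data: one series with 12 time steps.
df = pd.DataFrame(
    {
        "unique_id": "series_0",
        "ds": pd.date_range("2024-01-01", periods=12, freq="D"),
        "y": range(12),
    }
)

# Splits are computed ONCE, outside the tuning loop...
splits = make_backtest_splits(df, n_windows=2, h=3)

# ...and re-used for every candidate configuration, so the split
# cost is paid once instead of once per trial.
param_grid = [{"learning_rate": 0.1}, {"learning_rate": 0.05}]
for params in param_grid:
    for train, valid in splits:
        # fit a model on `train` with `params`, score on `valid`
        pass
```

The same pattern applies with Optuna: compute `splits` before calling `study.optimize` and close over them in the objective, so each trial only pays for model fitting.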
Use case
Faster tuning process, because the backtest splits are computed only once instead of once per tuning round.