[FIX] Unify API
This is a large refactoring PR and is open for discussion. Its main goals are to unify the API across different model types and to unify loss functions across different loss types.
Refactoring:
- Fuses `BaseWindows`, `BaseMultivariate` and `BaseRecurrent` into `BaseModel`, removing the need for separate classes and unifying the model API across different model types. Instead, this PR introduces two model attributes, yielding four possible model options: `RECURRENT` (True/False) and `MULTIVARIATE` (True/False). We currently have a model for every combination except a recurrent multivariate model (e.g. a multivariate LSTM); however, this is now relatively simple to add. In addition, this change allows models to be recurrent or not, or multivariate or not, on the fly, based on users' input. This also allows for easier modelling going forward.
- Unifies the model API across all models, adding missing input variables to all model types.
- Refactors losses, among other things removing unnecessary `domain_map` functions.
- Moves `loss.domain_map` outside of models to `BaseModel`.
- Moves `RevINMultivariate`, used by `TSMixer`, `TSMixerx` and `RMoK`, to `common.modules`.
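The two-flag design described above can be illustrated with a minimal sketch. Only `BaseModel`, `RECURRENT` and `MULTIVARIATE` come from the PR description; the subclass names and bodies here are hypothetical stand-ins, not the library's actual classes.

```python
# Illustrative sketch: two class-level flags replace three separate base
# classes, spanning four possible model types.

class BaseModel:
    RECURRENT = False      # True: step-by-step (recurrent) decoding
    MULTIVARIATE = False   # True: jointly models all series

class DirectUnivariate(BaseModel):      # e.g. a windows-style MLP model
    pass

class RecurrentUnivariate(BaseModel):   # e.g. an LSTM
    RECURRENT = True

class DirectMultivariate(BaseModel):    # e.g. a TSMixer-style model
    MULTIVARIATE = True

# The fourth combination (RECURRENT and MULTIVARIATE both True, e.g. a
# multivariate LSTM) has no model yet, but the flag design makes it easy
# to add.
```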
Features:
- All losses are compatible with all types of models (e.g. univariate/multivariate, direct/recurrent), OR appropriate protection has been added.
- `DistributionLoss` now supports the use of `quantiles` in `predict`, allowing easy quantile retrieval for all DistributionLosses.
- Mixture losses (`GMM`, `PMM` and `NBMM`) now support learned weights for weighted mixture distribution outputs.
- Mixture losses now support the use of `quantiles` in `predict`, allowing easy quantile retrieval.
- Improved stability of `ISQF` by adding softplus protection around some parameters instead of using `.abs`.
- Unified API for any quantile or any confidence level during `predict` for both point and distribution losses.
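Conceptually, retrieving quantiles from a distribution loss at predict time amounts to evaluating the inverse CDF of the fitted distribution at the requested probabilities. A minimal stdlib sketch, using a standard normal as a stand-in for a learned output distribution (this is not the library's actual code path):

```python
from statistics import NormalDist

# Stand-in for a learned predictive distribution at one forecast step.
dist = NormalDist(mu=100.0, sigma=10.0)

# Requested probabilities, e.g. passed via `quantiles` at predict time.
quantiles = [0.1, 0.5, 0.9]

# Quantile retrieval = inverse CDF evaluated at each probability.
preds = [dist.inv_cdf(q) for q in quantiles]
```

The median recovers the location parameter, and the outer quantiles form a symmetric band around it for this symmetric distribution.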
Bug fixes:
- `MASE` loss now works.
- Added various protections around invalid parameter combinations (e.g. regarding losses).
- `StudentT`: increased the default DoF to 3 to reduce unbounded-variance issues.
- All models are now tested using a test function on the AirPassengers dataset; most models had `eval: false` on their examples whilst not having any other tests, causing most models to effectively not be tested at all.
- `IQLoss` didn't give monotonic quantiles; now it does (by quantiling the quantiles).
- When training with both a conformal and a non-conformal method, the latter was also cross-validated to compute conformity scores. This redundant training step is removed in this PR.
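The "quantiling the quantiles" fix can be understood as a rearrangement step: if independently predicted quantiles cross each other, sorting them along the quantile dimension restores monotonicity while keeping the same set of values. A toy sketch of the idea (not the `IQLoss` implementation itself):

```python
# Predicted values for q10, q50, q90 at one forecast step; note the
# crossing: the q50 prediction is below the q10 prediction.
predicted = [1.2, 0.9, 1.5]

# Sorting along the quantile axis yields a valid, non-decreasing set of
# quantiles without changing the predicted values themselves.
monotonic = sorted(predicted)
```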
Breaking changes:
- Rewrite of all recurrent models to get rid of the quadratic (in the sequence dimension) space complexity. As a result, it is impossible to load a recurrent model from a previous version into this version.
- Recurrent models now require an `input_size` to be given.
- `TCN` and `DRNN` are now windows models, not recurrent models.
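The space win motivating the recurrent rewrite can be seen with back-of-the-envelope arithmetic: materializing one input window per time step of a length-T sequence stores on the order of T x T values, whereas a single pass over the sequence keeps O(T) values per layer. The numbers below are purely illustrative, not measurements from the PR.

```python
T = 1_000                 # sequence length (illustrative)

# Old approach: ~one length-T window per time step -> quadratic in T.
window_based_space = T * T

# New approach: one pass over the sequence -> linear in T per layer.
single_pass_space = T

ratio = window_based_space / single_pass_space  # grows linearly with T
```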
Tests:
- Added `common._model_checks.py`, which includes a model testing function. This function runs on every separate model, ensuring that every model is tested on push.
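A shared per-model smoke test might follow the pattern below; all names are hypothetical and only illustrate the idea behind `common._model_checks.py` (exercise every model end-to-end on a small dataset, collecting failures rather than stopping at the first one). The stand-in model classes make the sketch runnable without the real library.

```python
def check_models(model_classes, dataset):
    """Run a fit/predict smoke test for every model class; return failures."""
    failures = {}
    for cls in model_classes:
        try:
            model = cls()
            model.fit(dataset)
            model.predict(dataset)
        except Exception as err:  # collect instead of stopping at the first
            failures[cls.__name__] = err
    return failures

# Tiny stand-ins so the sketch runs without the real library.
class GoodModel:
    def fit(self, d): pass
    def predict(self, d): return [0] * len(d)

class BadModel:
    def fit(self, d): raise ValueError("invalid loss combination")
    def predict(self, d): return []

failures = check_models([GoodModel, BadModel], dataset=[1, 2, 3])
```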
Todo:
- [x] Test models on speed/scaling compared to the current implementation across a set of datasets.
- [x] Make sure docstring of all multivariate models is updated to reflect the additional inputs
this is a very cool effort @elephaint. the new features look exciting (eg losses compatibility, model unification, ...) and there are a lot of bug fixes, too!
Thanks! At the moment the main issue is that there is some performance regression; a little bit of work to do there.
Performance tests - first picture is baseline (existing main repo), second picture is this PR.
Conclusion
- There is some performance regression on some recurrent models. This is due to the different implementation: recurrent models are now flexibly implemented as either direct or recurrent models (they have a flag for setting this; by default they are implemented as `direct`). I'm OK with this performance regression, as it offers massive gains in space complexity, which is the key benefit of, and reason to use, a recurrent model.
- Larger models (multivariate, Transformers) seem to be faster with the current implementation.
(Screenshots: Model performance; Model performance 2; Multivariate model performance.)
I ran my own small experiment, and models are slightly faster, with no degradation in performance. The refactoring looks good to me, but a second opinion would be great on this!
Thanks!
Todo:
- [x] Check correct functioning of #1233 and add a test for that functionality in the code
- [x] ~~Add #1233 to mixture losses~~ Not in this PR, keep it in backlog
- [x] Check correct functioning of `horizon_weight` in other losses