# Improve `WhittakerSmooth`, `AirPls`, and `ArPls` performance (#44)
This pull request primarily tackles issue #44, but it does not fully close it (see the second point of Next Steps). It should be squashed before merging because it comprises more than 100 commits.
## Main Feature Changes

### Implementations
- complete removal of sparse matrices from `WhittakerSmooth`, `AirPls`, and `ArPls` and transition to the more appropriate LAPACK banded storage format (see the first sketch after this list). Why? Because
  - sparse matrices have a huge overhead, and their initialisation alone takes longer (> 1 ... 10 ms) than LAPACK's banded solvers take for a complete smooth (mostly < 1 ms). In contrast, the banded storage can be achieved with dense NumPy arrays, whose initialisation takes only a small fraction of the time required for sparse matrices.
  - dense NumPy arrays allow for a more efficient computation of the penalty matrix `D.T @ D` than what can be achieved via a sparse matrix, even though the logic becomes a bit more elaborate: symmetry has to be exploited while the redundant computations that make up most of the computation time have to be avoided.
  - the sparse solver used before (SuperLU) was designed for sparse matrices with an arbitrary sparsity pattern, but the matrices of the three algorithms have a very well-defined sparsity pattern, namely they are banded (tridiagonal for difference order 1, pentadiagonal for difference order 2, and so on). Dedicated banded solvers offer the highest performance here because the algorithm can follow a well-defined and straightforward pattern right from the start.
- unification of the implementations of `WhittakerSmooth`, `AirPls`, and `ArPls`, since they all rely on the same base algorithm and only differ in their weighting strategies. Now they all inherit from the class `chemotools.utils._whittaker_base.main.WhittakerLikeSolver`, which handles all the underlying math once in a centralized place. The only thing the three transformer classes add now is a customized weighting strategy. Each of the classes uses different access points to the solver, depending on the checks/preprocessing it needs to conduct.
- adding `pentapy` support: the availability of `pentapy` is checked at runtime, and its high-performance solver is used in all scenarios where the difference order is 2 (see Timings and the second sketch after this list).
- improvement of internal checks and type conversions via class variables (by coincidence related to #87).
- adding the respective documentation.
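
To make the banded-storage idea concrete, here is a minimal sketch of a Whittaker smooth solved with SciPy's banded Cholesky solver `scipy.linalg.solveh_banded`. It is illustrative only: this PR uses LAPACK's LU-based banded routines instead, and it computes the bands of `D.T @ D` directly rather than taking the dense detour below.

```python
import numpy as np
from scipy.linalg import solveh_banded


def whittaker_smooth_banded(y, lam=1e4, diff_order=2, weights=None):
    """Solves (W + lam * D.T @ D) @ z = W @ y in LAPACK banded storage.

    Illustrative sketch only; the PR computes the bands of ``D.T @ D``
    directly (exploiting symmetry) instead of the dense detour below.
    """
    n = y.size
    w = np.ones(n) if weights is None else np.asarray(weights)

    # D is the ``diff_order``-th order forward difference matrix; its Gram
    # matrix D.T @ D is banded with ``diff_order`` super-diagonals
    D = np.diff(np.eye(n), n=diff_order, axis=0)
    lhs = lam * (D.T @ D)

    # upper banded storage as expected by LAPACK/SciPy: the last row holds
    # the main diagonal, the rows above it hold the super-diagonals
    ab = np.zeros((diff_order + 1, n))
    for offset in range(diff_order + 1):
        ab[diff_order - offset, offset:] = np.diagonal(lhs, offset=offset)
    ab[-1, :] += w  # add the diagonal weight matrix W

    return solveh_banded(ab, w * y)
```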
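The runtime `pentapy` dispatch could look roughly like the following sketch. The fallback via `scipy.linalg.solve_banded` mirrors the LAPACK path; the exact band layout that `pentapy.solve` expects (steered by its `is_flat` and `index_row_wise` flags) is an assumption here and must be checked against pentapy's documentation.

```python
from scipy.linalg import solve_banded

try:  # optional dependency, checked once at import time
    import pentapy

    _HAS_PENTAPY = True
except ImportError:
    _HAS_PENTAPY = False


def solve_whittaker_system(ab, b, diff_order):
    """Dispatches to pentapy for pentadiagonal systems, else to LAPACK.

    ``ab`` is assumed to be in banded storage with ``diff_order`` sub- and
    super-diagonals; whether pentapy accepts exactly this layout (via
    ``is_flat``/``index_row_wise``) is an assumption to verify.
    """
    if diff_order == 2 and _HAS_PENTAPY:
        return pentapy.solve(ab, b, is_flat=True, index_row_wise=False)
    return solve_banded((diff_order, diff_order), ab, b)
```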
## Timings
In summary, the speedup with the minimum set of dependencies is ~5x for all algorithms. When `pentapy` is used, however, the speedup can reach up to 15x. Since `pentapy` is used for difference order 2, which is the standard use case, this is quite a gain.
Yet, Rust-based implementations seem to be even faster, so we definitely did not reach the limit here.
### `WhittakerSmooth` with difference order 1
Speedup of ~5 to 6 times
### `WhittakerSmooth` with difference order 2
- Without `pentapy`: speedup of ~5 times
- With `pentapy`: speedup of ~5 to 15 times
### `ArPls`
- Without `pentapy`: speedup of ~4 times
- With `pentapy`: speedup of ~5 to 15 times
### `AirPls` with polynomial order 1
Speedup of ~12 to 5 times
### `AirPls` with polynomial order 2
- Without `pentapy`: speedup of ~12 to 5 times
- With `pentapy`: speedup of ~10 to 15 times
## Next Steps
- numerical stability of the banded solver (partially pivoted LU decomposition) can only be achieved for difference orders up to 2 when the size of the spectra grows to common sizes of 1000 to 10000 points. Beyond these difference orders, high `lam` values are no longer possible. Many Whittaker smoother implementations out there suffer from this, but it is something that should be tackled, e.g., by also invoking a banded QR decomposition (see the conditioning sketch after this list).
- the baseline algorithms use an initialisation that can be far off from the true baseline. Therefore, they take a lot of iterations to converge. Having a good initial guess could solve the problem.
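
A quick way to see the conditioning issue from the first point is to inspect the condition number of the left-hand-side matrix `I + lam * D.T @ D`, which grows with both `lam` and the difference order. The dense construction, `n`, and the `lam` values below are arbitrary illustration choices:

```python
import numpy as np

n = 1000  # a common spectrum size
for order in (1, 2, 3, 4):
    # D is the ``order``-th forward difference matrix (dense only for clarity)
    D = np.diff(np.eye(n), n=order, axis=0)
    for lam in (1e6, 1e12):
        A = np.eye(n) + lam * (D.T @ D)
        # once cond(A) approaches 1 / machine epsilon (~4.5e15 for float64),
        # the solution of A @ z = b cannot be trusted anymore
        print(f"order={order}, lam={lam:.0e}: cond={np.linalg.cond(A):.2e}")
```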
## Additional features
Given that this was a lot of refactoring, the chance was used to enrich the `WhittakerSmooth` by
- adding a `sample_weight` keyword argument to `transform` and `fit_transform` to allow for locally weighting data points depending on their noise level (see the usage sketch after this list). Basically, this puts the `WhittakerSmooth` on par with `ArPls` and `AirPls`, which were already able to pass weights.
- weights allow for an automated determination of `lam` via maximization of the log marginal likelihood (the same approach as for sklearn's `GaussianProcessRegressor`).
- a function to estimate the local/global noise levels, which can be used for the weighting required by the log marginal likelihood method (`chemotools.smooth.estimate_noise_stddev` or `chemotools.utils.estimate_noise_stddev`).
- adding the possibility of a model-based specification of `lam` via `chemotools.smooth.WhittakerSmoothLambda`, similar to SciPy's `Bounds` for specifying the bounds of parameters during optimizations.
- adding a SciPy-like wrapper for banded LU decompositions (`chemotools.utils._banded_linalg`).
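
A hypothetical usage sketch of these additions. Only the names `WhittakerSmooth`, `estimate_noise_stddev`, and the `sample_weight` keyword are taken from this PR; the constructor arguments and the exact call signatures are assumptions:

```python
import numpy as np

from chemotools.smooth import WhittakerSmooth, estimate_noise_stddev

# toy data: ten noisy copies of a smooth signal with 500 points each
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0.0, 4.0 * np.pi, 500))
spectra = signal + rng.normal(scale=0.1, size=(10, 500))

# estimate the noise level and turn it into inverse-variance weights
# (call signature assumed)
noise_stddev = estimate_noise_stddev(spectra)
weights = 1.0 / np.square(noise_stddev)

# route the weights through the new ``sample_weight`` keyword
# (constructor arguments assumed)
smoother = WhittakerSmooth(lam=1e4, differences=2)
smoothed = smoother.fit_transform(spectra, sample_weight=weights)
```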
## Package structure
- the `settings.json` in the `.vscode` folder was removed from Git version control. Having a file that can overwrite the user's local settings (which might contain much more than just formatting and linting settings) can be quite destructive. It was replaced by a `settings_template.json` that can provide the basic setup for the user.
- the `requirements.txt` was split into a `requirements.txt` for the main package capabilities and a `requirements-dev.txt` for the dependencies needed during development. #53 will profit from this, since one can then simply point to the `requirements.txt` from `pyproject.toml` without having to worry about the user accidentally installing `pytest`, `matplotlib`, etc.
- basic linting via `Ruff` was configured in the `pyproject.toml` (requires `settings.json` to have `Ruff` configured). It reveals some unused imports, wrong type hints, and non-Pythonic statements that should be tackled in the future.
## Tests
- by including `pytest-xdist`, the tests can now be run in parallel, which saves quite some time. The command I always used for running the tests is

```shell
pytest --cov=chemotools .\tests -n=auto --cov-report html -x
```

- with `pytest.mark.parametrize`, the tests were extended to run the same test on multiple input combinations. This was especially useful for the transformer functionality and numerics tests (a minimal example follows after this list).
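
A minimal example of the pattern; the `WhittakerSmooth` constructor arguments are assumed, and the test itself is illustrative rather than one taken from the test suite:

```python
import numpy as np
import pytest

from chemotools.smooth import WhittakerSmooth


# stacking two parametrize markers runs the test for all 4 combinations
@pytest.mark.parametrize("differences", [1, 2])
@pytest.mark.parametrize("lam", [1e2, 1e4])
def test_whittaker_smooth_preserves_shape(lam: float, differences: int):
    X = np.random.default_rng(42).normal(size=(5, 100))
    smoothed = WhittakerSmooth(lam=lam, differences=differences).fit_transform(X)
    assert smoothed.shape == X.shape
```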
## Miscellaneous
- Fixed a typo and a type hint here and there
@MothNik FANTASTIC - I have been waiting for this day with a lot of enthusiasm!!
I am starting to review it right now, and it is a long review, but I hope I can have it done in about a month from now! It is a very exciting contribution. During the review process, we could also start considering how to add the different improvements to the documentation pages :smile:
The restructuring of the package is a good idea; it goes perfectly in line with #53, which I need to get done during the summer. Having the dev dependencies separated is a great starting point! It is also nice to hear you have been using Ruff for linting; it was also on my todo list to transition from black. I did not know about pytest-xdist, but I have started testing it and... it is pretty cool, I like it a lot!
I think that now it is my turn, and I have some work to do
@paucablop You are highly welcome!
Yes, it's a lot of files. I'm sorry it turned out so big. Take all the time you need and just ping me for the documentation pages.
I usually would not do package restructuring in a feature branch, but the branch required some setup for the development environment, especially for the tests. I hope this will help with #53 and also #61 and make the installation easier.
As I said, take your time.
I want to give special credit and thanks for the support by Guillaume Biessy, the author of *Revisiting Whittaker-Henderson Smoothing*, which is - as far as I'm aware - the best review of Whittaker-Henderson smoothing out there because it is illustrated very well and focuses on the key points.
@paucablop
I'm done with renaming all the variables and functions to make them more readable.
Besides, I also added a tiny cheat sheet for testing with pytest as a README.