
Improve `WhittakerSmooth`, `AirPls`, and `ArPls` performance (#44)

Open · MothNik opened this issue 1 year ago • 4 comments

This pull request primarily tackles issue #44, but it does not fully close it (see the second point of 🚶 Next Steps). It should be squashed before merging because it comprises more than 100 commits.

๐Ÿ—๏ธ Main Feature Changes

🧑‍💻 Implementations

  • complete removal of sparse matrices from WhittakerSmooth, AirPls, and ArPls and a transition to the more appropriate LAPACK banded storage format. Why? Because
    • sparse matrices have a huge overhead, and their initialisation alone takes longer (> 1 ... 10 ms) than LAPACK's banded solvers need for a complete smooth (mostly < 1 ms). In contrast, banded storage can be achieved with dense NumPy arrays, whose initialisation takes only a small fraction of the time required for sparse matrices.
    • dense NumPy arrays allow for a more efficient computation of the penalty matrix D.T @ D than what can be achieved via a sparse matrix, even though the logic becomes a bit more elaborate because symmetry has to be exploited while the redundant computations that make up most of the computation time have to be avoided.
    • the sparse solver used before (SuperLU) was designed for sparse matrices with an arbitrary sparsity pattern, but the matrices of the three algorithms have a very well-defined sparsity pattern, namely they are banded (tridiagonal for difference order 1, pentadiagonal for difference order 2, and so on). Dedicated banded solvers offer the highest performance here because the algorithm can follow a well-defined and straightforward pattern right from the start.
  • unification of the implementations of WhittakerSmooth, AirPls, and ArPls since they all rely on the same base algorithm and only differ in their weighting strategies. Now, they all inherit from the class chemotools.utils._whittaker_base.main.WhittakerLikeSolver that handles all the underlying math once in a centralised place. The only thing the three transformer classes add now is a customised weighting strategy. Each class uses a different access point to the solver depending on the checks/preprocessing it needs to conduct.
  • adding pentapy support: the availability of pentapy is checked at runtime, and its high-performance solver is used for all scenarios where the difference order is 2 (see ⏱️ Timings).
  • improvement of internal checks and type conversions via class variables (coincidentally related to #87).
  • adding the respective documentation.
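To make the banded-storage idea concrete, here is a minimal, hypothetical sketch (not the actual chemotools implementation) of a Whittaker smooth solved with SciPy's symmetric banded LAPACK solver. For illustration it builds `D.T @ D` densely first; the real point of the PR is that the bands can be constructed directly without ever forming dense or sparse `n x n` matrices:

```python
import numpy as np
from scipy.linalg import solveh_banded

def whittaker_smooth(y, lam=100.0, diff_order=2):
    """Solve (I + lam * D.T @ D) z = y via LAPACK's symmetric banded solver."""
    n = y.size
    # dense order-d difference matrix, shape (n - d, n); illustrative only
    D = np.diff(np.eye(n), n=diff_order, axis=0)
    A = np.eye(n) + lam * (D.T @ D)  # symmetric, banded with diff_order super-diagonals
    # pack the upper bands into LAPACK banded storage: ab[u + i - j, j] = A[i, j]
    ab = np.zeros((diff_order + 1, n))
    for k in range(diff_order + 1):
        ab[diff_order - k, k:] = np.diag(A, k=k)
    return solveh_banded(ab, y)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
noisy = np.sin(x) + 0.1 * rng.standard_normal(x.size)
smoothed = whittaker_smooth(noisy, lam=1e2, diff_order=2)
```

For a tridiagonal/pentadiagonal system like this, the banded factorisation runs in O(n) instead of the much more expensive general sparse LU.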

โฑ๏ธ Timings

In summary, the speedup with the minimum set of dependencies is ~5x for all algorithms. With pentapy, however, the speedup can reach up to 15x. Since pentapy is used for difference order 2, which is the standard use case, this is quite a gain. Yet, Rust-based implementations seem to be even faster, so we definitely have not reached the limit here.

🌊 WhittakerSmooth with difference order 1

Speedup of ~5 to 6 times (benchmark plot: lam_1-00e+02_diff_1_pentapy_False)

🌊 WhittakerSmooth with difference order 2

Without pentapy - Speedup of ~5 times (benchmark plot: lam_1-00e+02_diff_2_pentapy_False)

With pentapy - Speedup of ~5 to 15 times (benchmark plot: lam_1-00e+02_diff_2_pentapy_True)

๐Ÿดโ€โ˜ ๏ธ ArPls

Without pentapy - Speedup of ~4 times lam_1-00e+10_diff_2_pentapy_False

With pentapy - Speedup of ~5 to 15 times lam_1-00e+10_diff_2_pentapy_True

๐Ÿ›ฉ๏ธ AirPls with polynomial order 1

Speedup of ~12 to 5 times lam_1-00e+03_poly_1_pentapy_False

๐Ÿ›ฉ๏ธ AirPls with polynomial order 2

Speedup of ~12 to 5 times lam_1-00e+03_poly_2_pentapy_False

With pentapy - Speedup of ~10 to 15 times lam_1-00e+03_poly_2_pentapy_True

🚶 Next Steps

  • numerical stability of the banded solver (partially pivoted LU decomposition) can only be achieved for difference orders up to 2 when the size of the spectra grows to common sizes of 1,000 to 10,000 points. Beyond these difference orders, high lam values are no longer possible. Many Whittaker smoother implementations out there suffer from this, but it is something that should be tackled, e.g., by also invoking a banded QR decomposition.
  • the baseline algorithms use an initialisation that can be far off from the true baseline. Therefore, they take a lot of iterations to converge. A good initial guess could solve this problem.
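The stability point can be illustrated with the condition number of the penalised system: for a fixed lam, it grows roughly like lam·4^d with the difference order d, so the factorisation loses accuracy quickly for higher orders. A small demonstration in plain NumPy (an illustrative sketch, not the package code):

```python
import numpy as np

def penalized_cond(n, lam, diff_order):
    # condition number of A = I + lam * D.T @ D for an order-d difference matrix D;
    # the smallest eigenvalue of A stays at 1 (D has a d-dimensional null space),
    # while the largest grows roughly like lam * 4**d
    D = np.diff(np.eye(n), n=diff_order, axis=0)
    return np.linalg.cond(np.eye(n) + lam * (D.T @ D))

conds = [penalized_cond(200, 1e6, d) for d in (1, 2, 3, 4)]
for d, c in zip((1, 2, 3, 4), conds):
    print(f"difference order {d}: cond ~ {c:.2e}")
```

Once the condition number approaches 1/eps of the working precision (~1e16 in double precision), even a backward-stable solver cannot return meaningful digits, which is why high lam values become infeasible beyond difference order 2.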

๐ŸŽ Additional features

Given that this was a lot of refactoring anyway, the opportunity was taken to enrich the WhittakerSmooth by

  • adding a sample_weight keyword argument to transform and fit_transform to allow for locally weighting data points depending on their noise level. Basically, this puts WhittakerSmooth on par with ArPls and AirPls, which were already able to handle weights.
  • weights allow for an automated determination of lam via maximisation of the log marginal likelihood (the same approach as in sklearn's GaussianProcessRegressor).
  • a function to estimate the local/global noise levels, which can be used for the weighting required by the log marginal likelihood method (chemotools.smooth.estimate_noise_stddev or chemotools.utils.estimate_noise_stddev).
  • adding the possibility of a model-based specification of lam via chemotools.smooth.WhittakerSmoothLambda, similar to SciPy's Bounds for specifying parameter bounds during optimisations.
  • adding a SciPy-like wrapper for banded LU decompositions (chemotools.utils._banded_linalg).
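For illustration, a common robust noise estimator of this flavour works on higher-order finite differences, which cancel the smooth signal content while keeping the noise. This is a generic sketch under standard assumptions; the actual chemotools estimate_noise_stddev may use a different (e.g. windowed/local) scheme:

```python
from math import comb

import numpy as np

def noise_stddev_mad(y, diff_order=2):
    """Robust global noise estimate from order-d differences of the signal."""
    # order-d differences of a smooth signal are ~0, while iid noise of
    # standard deviation sigma yields differences with variance comb(2d, d) * sigma**2
    d = np.diff(np.asarray(y, dtype=float), n=diff_order)
    scale = np.sqrt(comb(2 * diff_order, diff_order))
    # MAD / 0.6745 is a robust estimate of a Gaussian standard deviation
    return np.median(np.abs(d - np.median(d))) / (0.6745 * scale)

rng = np.random.default_rng(42)
x = np.linspace(0.0, 4.0 * np.pi, 2000)
y = np.sin(x) + 0.5 * rng.standard_normal(x.size)
sigma_hat = noise_stddev_mad(y)  # should be close to the true 0.5
```

The resulting sigma can then be turned into per-point weights (e.g. 1/sigma**2) for the log marginal likelihood based selection of lam.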

📦📂 Package structure

  • the settings.json in the .vscode folder was removed from Git version control. Having a file that can overwrite the user's local settings (which might contain much more than just formatting and linting settings) can be quite destructive. It was replaced by a settings_template.json that provides the basic setup for the user.
  • the requirements.txt was split into a requirements.txt for the main package capabilities and a requirements-dev.txt for the dependencies needed during development. #53 will profit from this, since one can then simply point to the requirements.txt from pyproject.toml without having to worry about the user accidentally installing pytest, matplotlib, etc.
  • basic linting via Ruff was configured in the pyproject.toml (requires settings.json to have Ruff configured). It reveals some unused imports, wrong type hints, and non-pythonic statements that should be tackled in the future.

โœ…โŒ Tests

  • by including pytest-xdist, the tests can now be run in parallel, which saves quite some time. The command I always used for running the tests is
pytest --cov=chemotools .\tests -n=auto --cov-report html -x
  • with pytest.mark.parametrize, the tests were extended to run the same test on multiple input combinations. This was especially useful for the transformer functionality and numerics tests.
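As a minimal illustration of that parametrization pattern (a generic toy test, not one of the package's actual tests):

```python
import numpy as np
import pytest

def forward_diff(y, order):
    # helper under test: an order-n forward difference
    return np.diff(np.asarray(y, dtype=float), n=order)

# stacking parametrize decorators runs the test for every combination,
# here 3 x 2 = 6 cases from a single definition
@pytest.mark.parametrize("order", [1, 2, 3])
@pytest.mark.parametrize("n", [10, 100])
def test_diff_reduces_length(order, n):
    y = np.arange(n, dtype=float)
    assert forward_diff(y, order).size == n - order
```

Combined with pytest-xdist, each of the generated cases can be scheduled to a different worker, which is where most of the test-time savings come from.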

🪤 Miscellaneous

  • โœ๏ธ Fixed a typo and a type hint here and there

MothNik avatar May 24 '24 13:05 MothNik

@MothNik FANTASTIC - I have been waiting for this day with a lot of enthusiasm 🤓🤓!!

I am starting to review it right now, and it is a long review, but I hope I can have it done about a month from now! It is a very exciting contribution. During the review process, we could also start considering how to add the different improvements to the documentation pages :smile:

The restructuring of the package is a good idea; it goes perfectly in line with #53, which I need to get done during the summer. Having the dev dependencies separated is a great starting point! Also nice to hear you have been using Ruff for linting; it was also on my todo list to transition from black. I did not know about pytest-xdist, but I have started testing it and... it is pretty cool, I like it a lot! 😎

I think that now it is my turn, and I have some work to do 🥳🥳

paucablop avatar May 24 '24 16:05 paucablop

@paucablop You are highly welcome 😸

Yes, it's a lot of files. I'm sorry it turned out so big 😅 Take all the time you need and just ping me for the documentation pages ✏️

I usually would not do package restructuring in a feature branch, but the branch required some setup for the development environment, especially for the tests ✅❌ I hope this will help with #53 and also #61 and make the installation easier 💾

As I said, take your time 😸

MothNik avatar May 24 '24 16:05 MothNik

I want to give special credit and thanks for the support by Guillaume Biessy, the author of Revisiting Whittaker-Henderson Smoothing, which is - as far as I'm aware - the best review of Whittaker-Henderson smoothing out there because it is very well illustrated and focuses on the key points 🙏

MothNik avatar May 25 '24 22:05 MothNik

@paucablop I'm done with renaming all the variables and functions to make them more readable. In addition, I added a tiny cheatsheet for testing with pytest as a README.

MothNik avatar Jun 24 '24 21:06 MothNik