
Added log-logistic/Fisk distribution to SPEI

Open · Emmadd opened this pull request 2 years ago · 10 comments

My first attempt to include the Fisk distribution in SPEI, as suggested in #106, copying as much as possible from the Pearson III implementation. The last @numba.jit decorator leads to problems, but the code runs fine without it. I have not created any tests yet. Hope someone can have a look :-) Let me know what you think and what should be done.
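For context, a minimal sketch of what a Fisk (log-logistic) fitting-and-transform step could look like using scipy.stats.fisk, scipy's log-logistic distribution; this is an illustration of the approach, not the PR's actual code:

```python
# Hedged sketch (not the PR's code): fit a Fisk/log-logistic distribution to a
# water-balance series (precipitation minus PET) and standardize via the
# inverse normal CDF, mirroring how the gamma and Pearson III fittings work.
import numpy as np
import scipy.stats


def transform_fitted_fisk(values: np.ndarray) -> np.ndarray:
    # scipy.stats.fisk.fit returns (c, loc, scale), with c the shape parameter
    c, loc, scale = scipy.stats.fisk.fit(values[np.isfinite(values)])

    # cumulative probabilities of the observations under the fitted distribution
    probabilities = scipy.stats.fisk.cdf(values, c, loc=loc, scale=scale)

    # clip so the inverse normal CDF stays finite at the extremes
    probabilities = np.clip(probabilities, 1e-6, 1.0 - 1e-6)

    # standardized, SPEI-like values
    return scipy.stats.norm.ppf(probabilities)
```

On the @numba.jit point: numba's nopython mode cannot compile scipy.stats calls, which would be consistent with the final decorator failing while the undecorated code runs fine.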

Emmadd avatar Aug 26 '21 10:08 Emmadd

I was wondering, as I was making changes to the docs, whether they should mention the possibility of pre-computing the SPEI distribution fitting variables. This seems possible in the code to me.

Emmadd avatar Sep 09 '21 07:09 Emmadd

> I was wondering, as I was making changes to the docs, whether they should mention the possibility of pre-computing the SPEI distribution fitting variables. This seems possible in the code to me.

Yes, I agree. We have done this already for SPI, and there is no reason we couldn't do something very similar for SPEI. Good idea!
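Continuing the hedged sketch above, pre-computing would amount to splitting the fit from the transform so the fitted parameters can be computed once over a calibration period, saved, and reused; the function names here are illustrative, not the package's API:

```python
# Hypothetical illustration of pre-computed fitting parameters; none of these
# names come from the climate_indices package.
import numpy as np
import scipy.stats


def fit_fisk_parameters(calibration_values: np.ndarray):
    # fit once over a calibration period; returns (c, loc, scale)
    finite = calibration_values[np.isfinite(calibration_values)]
    return scipy.stats.fisk.fit(finite)


def spei_from_parameters(values: np.ndarray, c, loc, scale) -> np.ndarray:
    # standardize new values using saved parameters, without refitting
    probabilities = scipy.stats.fisk.cdf(values, c, loc=loc, scale=scale)
    return scipy.stats.norm.ppf(np.clip(probabilities, 1e-6, 1.0 - 1e-6))


# fit once, persist the parameters, and reload them later instead of refitting
calibration = scipy.stats.fisk.rvs(c=2.0, scale=5.0, size=1200, random_state=1)
np.save("fisk_params.npy", np.array(fit_fisk_parameters(calibration)))
c, loc, scale = np.load("fisk_params.npy")
```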

monocongo avatar Sep 09 '21 14:09 monocongo

Hmmm, the tests that fail are for SPI and Pearson. Are they related to things I've changed in the branch?

```
=========================== short test summary info ============================
FAILED tests/test_compute.py::test_transform_fitted_pearson - AssertionError
FAILED tests/test_indices.py::test_spi - AssertionError
================== 2 failed, 34 passed, 11 warnings in 2.69s ===================
```

Emmadd avatar Sep 14 '21 13:09 Emmadd

Yes, almost certainly. This is why the tests are in place: to shield against changes that would give different results than before. In this case, however, we're expecting to see different values since we're using a new distribution, so we should come up with entirely new expected results. For example, we expect certain results when using the Pearson fitting and other results when using gamma; we now need an expected result for this new fitting, so that when the next person changes some code we'll have a test in place to make sure they haven't broken things for this distribution fitting.

So for this new distribution we need a separate collection of tests that exercise the new code and produce a result we can feel confident about. Once this is in place, all new code must pass the tests; otherwise we'll know we're introducing a significant/breaking change.

With all that being said, I'm not certain that the code you've introduced here is actually breaking things. These checks went off the rails a year or so ago, and I just haven't had time to do the requisite housecleaning to get us back on track. I expect that you may have broken some things and others were already broken. I can run these tests locally to debug, but it will take time; my day job is demanding. I totally appreciate your help with this package. Don't forget that many (usually invisible) others will benefit!

monocongo avatar Sep 14 '21 13:09 monocongo

No, @Emmadd, I don't think your new code has caused any of the existing tests to fail. Several of them were broken before you started this work, and I have not found the time to maintain the package and get that back in order. Your new code looks legit (very nice work, thanks for all the comments!), and my only recommendation at this point is to add a few tests to the test suite for this new distribution fitting. For example, in tests/test_compute.py we have test_gamma_parameters and test_pearson_parameters, so we should now add a test_fisk_parameters function, etc. Once those are in place and working as expected, I will approve this merge. Hopefully I will soon find the motivational impetus to get the rest of our tests back in shape and this project back on the rails!
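As a starting point, here is a hedged sketch of what a test_fisk_parameters-style test might look like. The existing tests compare against trusted reference values; since none exist yet for this distribution, the sketch uses a synthetic round trip (fit data drawn from known parameters and check that they are recovered) as a stand-in. The real test would instead call the new fitting code in compute and compare against a vetted expected result:

```python
# Hedged sketch of a possible test_fisk_parameters for tests/test_compute.py.
# This round-trips synthetic data through scipy's fit rather than calling the
# package's new fitting function, which is what the real test should exercise.
import numpy as np
import scipy.stats


def test_fisk_parameters():
    # draw a large sample from a Fisk distribution with known parameters
    rng = np.random.default_rng(42)
    sample = scipy.stats.fisk.rvs(c=2.5, scale=10.0, size=20_000, random_state=rng)

    # fitting should approximately recover the generating parameters
    c, loc, scale = scipy.stats.fisk.fit(sample, floc=0.0)
    np.testing.assert_allclose([c, scale], [2.5, 10.0], rtol=0.05)
```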

monocongo avatar Sep 15 '21 15:09 monocongo

There was a fix merged today (#450) which may fix the failing tests here. I'm not sure how to get the checks to run again (the instructions in the GitHub help pages don't match the buttons I see in the web UI), and I suspect it isn't possible since the original run of the checks happened over 30 days ago. Maybe you can pull from master to get the latest changes and commit that to this fork's branch, which would then trigger a new check?

monocongo avatar Nov 02 '21 18:11 monocongo

You can merge master back into your branch. Then push your branch back up. That will retrigger tests.

Turns out the science lead on my project is also interested in this particular distribution, so I'll let you know if he has any feedback. I should double-check the implementation here, because it borrows from the Pearson III code, which I had to fix.

As for the option to save out the fitted params for SPEI, stay tuned... I have a draft branch with that feature for pearson3 and gamma.

WeatherGod avatar Nov 02 '21 20:11 WeatherGod

Oh, before merging master back into your branch, you should fetch master from this repo. Let me know if you have questions!

WeatherGod avatar Nov 02 '21 20:11 WeatherGod

This PR also got me thinking about my work in progress adding SPEI support to the "spi" script. Right now, the spi script pre-allocates fitting arrays and stores them in a dictionary keyed by parameter name. With the two current distributions, the fitting parameter names do not collide with each other. However, this distribution's parameter names would collide, as the sketch below illustrates. I'll think on this a bit, but an elegant solution may require a significant overhaul of the "spi" script, and possibly of the other main script too (which isn't a bad thing!).
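To make the collision concrete, here is an illustration under assumed parameter names (the script's real keys may differ), together with one possible fix that namespaces the keys by distribution:

```python
# Illustration only: the key names and array shape here are assumptions about
# the spi script's internals, not its actual code.
import numpy as np

shape = (12, 100)  # hypothetical pre-allocated (calendar month, parameter) shape

# a flat dictionary works while gamma and Pearson III use disjoint key names...
fitting_arrays = {
    "alpha": np.empty(shape),  # gamma
    "beta": np.empty(shape),   # gamma
    "loc": np.empty(shape),    # pearson3 -- a Fisk fit would also need "loc"
    "scale": np.empty(shape),  # pearson3 -- and "scale": a collision
    "skew": np.empty(shape),   # pearson3
}

# ...so one option is to nest the parameters under the distribution name,
# which makes collisions across distributions impossible by construction
fitting_arrays = {
    "gamma": {"alpha": np.empty(shape), "beta": np.empty(shape)},
    "pearson3": {"loc": np.empty(shape), "scale": np.empty(shape), "skew": np.empty(shape)},
    "fisk": {"c": np.empty(shape), "loc": np.empty(shape), "scale": np.empty(shape)},
}
```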

In addition, we should also consider updating the API and CLI so that one can select which distributions to compute, in order to keep computational and memory costs down.

WeatherGod avatar Nov 03 '21 14:11 WeatherGod

SonarCloud Quality Gate failed.

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 8 (rating A)
Coverage: no coverage information
Duplication: 16.0%


sonarcloud[bot] avatar Jun 29 '23 16:06 sonarcloud[bot]