xskillscore
Feature request: Skill Scores
https://confluence.ecmwf.int/display/FUG/12.B+Statistical+Concepts+-+Probabilistic+Data
I think it's just:
import xskillscore as xs

def brier_skill_score(obs, fct, ref, threshold):
    # Brier score of the ensemble forecast against the binary observed event
    bscore_fct = xs.brier_score(obs > threshold, (fct > threshold).mean('member'))
    # Brier score of the reference forecast against the same event
    bscore_ref = xs.brier_score(obs > threshold, (ref > threshold).mean('member'))
    bskill = 1 - (bscore_fct / bscore_ref)
    return bskill
where ref can be persistence or climatology (where each year of the climatology is a member).
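For instance, a minimal persistence reference could be built like this (a sketch; the shift-by-one choice, the singleton member dimension, and the threshold value are assumptions, not part of the proposal above):

# persistence reference (hypothetical): the previous observation, given a
# singleton 'member' dimension so the .mean('member') above still applies
ref_persistence = obs.shift(time=1).expand_dims(member=[0])
bss = brier_skill_score(obs, fct, ref_persistence, threshold=0.5)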
You can see how @matteodefelice did it in his notebook (https://colab.research.google.com/drive/1wWHz_SMCHNuos5fxWRUJTcB6wqkTJQCR)
brier_score = xs.brier_score(obs > obs.quantile(2/3, dim='time'),
                             (fct > fct.quantile(2/3, dim='time')).mean('member'))
baseline = xs.brier_score(obs > obs.quantile(2/3, dim='time'), 2/3)
bss = 1 - (brier_score / baseline)
Yeah, I think both ways are valid, just with a different baseline; maybe his approach could be used as the default when ref=None:
def brier_skill_score(obs, fct, threshold, ref=None):
    bscore_fct = xs.brier_score(obs > threshold, (fct > threshold).mean('member'))
    if ref is None:
        # default baseline: a constant climatological probability, as above
        bscore_ref = xs.brier_score(obs > obs.quantile(2/3, dim='time'), 2/3)
    else:
        bscore_ref = xs.brier_score(obs > threshold, (ref > threshold).mean('member'))
    bskill = 1 - (bscore_fct / bscore_ref)
    return bskill
I think we should somehow come up with a pattern/wrapper that turns every score into a skill score:
- Brier Score and Brier Skill Score
- RMSE and RMSE Skill Score
- ...
The formula would come from https://www.cawcr.gov.au/projects/verification/.
Does anyone have ideas on how to do this in a few lines of code?
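One minimal sketch of such a wrapper, assuming a negatively oriented metric whose perfect score is 0 (e.g. xs.rmse or xs.brier_score); the skill_score name and signature here are hypothetical:

def skill_score(metric, obs, fct, ref, **kwargs):
    # generic 1 - score_fct / score_ref, valid for metrics whose perfect score is 0
    return 1 - metric(obs, fct, **kwargs) / metric(obs, ref, **kwargs)

# e.g. an RMSE skill score against a reference forecast:
# rmse_ss = skill_score(xs.rmse, obs, fct, ref, dim='time')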
Is it applicable across all scores, though? I am only familiar with the Brier skill score (first time I have heard of an RMSE skill score).
I think so. In climpred we have a metrics class with the attributes min, max, and perfect. Maybe it would be nice to move this to xskillscore.
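A perfect attribute would generalize the sketch above to metrics whose perfect score is not 0, following the generic skill score formula from the CAWCR page, SS = (score_fct - score_ref) / (score_perfect - score_ref); again a hypothetical sketch:

def skill_score(metric, obs, fct, ref, perfect=0, **kwargs):
    score_fct = metric(obs, fct, **kwargs)
    score_ref = metric(obs, ref, **kwargs)
    # reduces to 1 - score_fct / score_ref when perfect == 0
    return (score_fct - score_ref) / (perfect - score_ref)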
I think this sort of thing should be at climpred. a and b in xskillscore are two time series, for instance. Let's say a is the forecast and b the verification product. To get BSS, you'd evaluate a against b, and then a persistence derivation of b against b, for instance. Since we manage reference forecasts at climpred, I'd think over there we would just call xskillscore BS twice.
I guess with the addition of the sign test, this could still fit here. If you hand xskillscore two different forecasts (one being the dynamical forecast and the other the reference), you can compute skill scores. climpred would then wrap that to do nice looping through all the leads.
This could be a template: https://github.com/bradyrx/climpred/blob/8a3fc954df2043f998987a1964059a9dc0d2e11c/climpred/metrics.py#L128-L140, where we would need the perfect attribute.
But maybe we shouldn't overdo things. xs is perfect for doing easy functions like xs.metric(a, b, dim); skill scores can actually be calculated from this manually quite easily. I don't see a large computational benefit of implementing this compared to doing it sequentially as manual_scoring_function(xs.metric(a, b, dim), reference(*args, **kwargs)).
We at least shouldn't change the existing API when implementing this.
See https://github.com/xarray-contrib/xskillscore/issues/284. Leaving this open as there is good discussion here.
By the way, it seems that the line:

baseline = xs.brier_score(obs_final > obs_final.quantile(2/3, dim='time'), 2/3)

doesn't work anymore; I get this error on my Colab:
AttributeError Traceback (most recent call last)
<ipython-input-68-ad9dcdfd6ce1> in <module>()
----> 1 baseline = xs.brier_score(obs_final > obs_final.quantile(2/3, dim = 'time'), 2/3)
/usr/local/lib/python3.8/site-packages/xskillscore/core/probabilistic.py in brier_score(observations, forecasts, member_dim, fair, dim, weights, keep_attrs)
348 res = (e / M - o) ** 2 - e * (M - e) / (M ** 2 * (M - 1))
349 else:
--> 350 if member_dim in forecasts.dims:
351 forecasts = forecasts.mean(member_dim)
352 res = xr.apply_ufunc(
AttributeError: 'float' object has no attribute 'dims'
EDIT: yes, it now requires an xarray object and no longer works with plain numbers.
Thanks for reporting, @matteodefelice. Can you please open an issue for this? Conversion from a number to an xr.DataArray can easily be added then.
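In the meantime, a hypothetical workaround is to wrap the scalar in a DataArray yourself before passing it in (untested sketch):

import xarray as xr

# a 0-dimensional DataArray has a .dims attribute, which is what
# brier_score now checks for before taking the member mean
baseline = xs.brier_score(obs_final > obs_final.quantile(2/3, dim='time'),
                          xr.DataArray(2/3))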
Does anyone have an example of how to calculate the skill score for the RPS or CRPS? I still haven't found a way to do that...
We have crpss in climpred.
For crpss, look into the proper scoring example.
@matteodefelice I believe rpss is (kind of) in climpred as well: https://github.com/pangeo-data/climpred/blob/main/climpred/tests/test_probabilistic.py#L252. crpss is here: https://github.com/pangeo-data/climpred/blob/main/climpred/metrics.py#L2176. Porting these here would be appreciated.
I do RPSS from RPS here: https://renkulab.io/gitlab/aaron.spring/s2s-ai-competition-bootstrap/-/blob/master/notebooks/verification_RPSS.ipynb
Thanks @aaronspring, but the real difficulty for me is calculating the RPS of the climatology. I will find a way, though. Thanks to everyone for sharing.
Generalized for perfect-model and hindcast setups:
https://github.com/pangeo-data/climpred/blob/14f7458c1f2e944990d7008001adab49480fe07d/climpred/reference.py#L106
I create a fake 1-member forecast from groupby('dayofyear').
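In spirit, that trick looks roughly like this (a sketch with assumed variable names; the actual implementation is at the link above):

# day-of-year climatology of the observations...
clim = obs.groupby('time.dayofyear').mean('time')
# ...broadcast back onto the original time axis, with one fake member
fake_fct = clim.sel(dayofyear=obs['time'].dt.dayofyear).expand_dims(member=[0])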
That's a good idea. I did it, and the results seem reasonable. Given that I am working with seasonal averages (one point per year), this is what I have done:
cat_edges = obs.quantile(q=[1/3, 2/3], dim='year').rename({'quantile': 'category_edge'})
rps_clim = xskillscore.rps(obs, obs.mean(dim='year').expand_dims({'member': 1}),
                           cat_edges, dim='year', member_dim='member')
The RPSS is comparable with the BSS, so I think this should work.
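For reference, the skill score itself then follows the same pattern as the BSS above (fct standing for the ensemble forecast being verified):

rps_fct = xskillscore.rps(obs, fct, cat_edges, dim='year', member_dim='member')
rpss = 1 - (rps_fct / rps_clim)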