xskillscore
metrics to add
Keep this comment in this issue updated with metrics to add to xskillscore. Consolidate from other issues as well as from comments that appear below. The format for entries is:
- METRIC_API_NAME (LONG_METRIC_NAME) [RELATED ISSUE (IF EXISTS)] {METRIC SOURCE/EQUATION}
The full list of metrics currently in xskillscore can be found here. Remove issues from here once they are added.
Correlation Metrics

- `pearson_r_auto` (Pearson R Autocorrelation) [#205] {https://github.com/bradyrx/esmtools/blob/master/esmtools/stats.py#L171}
Distance Metrics

- `medape` (Median Absolute Percentage Error)
- `rmspe` (Root Mean Square Percentage Error) [#46] {https://www.kaggle.com/c/rossmann-store-sales/overview/evaluation}
- `msle` (Mean Squared Log Error) [#47] {https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_log_error.html}
- `crmse` (Centered Root Mean Square Error) {https://solarforecastarbiter.org/metrics/#crmse}
- `maape` (Mean Arctangent Absolute Percentage Error) [#85] {https://github.com/xarray-contrib/xskillscore/issues/85#issuecomment-653135691}
- `explained_var` (Explained Variance Score) [#213] {https://scikit-learn.org/stable/modules/model_evaluation.html#explained-variance-score}
- `mean_pinball` (Mean Pinball Loss) [#274] {https://scikit-learn.org/dev/modules/generated/sklearn.metrics.mean_pinball_loss.html}
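As a rough illustration of one of the requested distance metrics, here is a minimal numpy sketch of MAAPE, following the arctangent-of-APE definition linked in #85. The function name and signature are hypothetical, not the eventual xskillscore API:

```python
import numpy as np

def maape(obs, fcst):
    """Mean Arctangent Absolute Percentage Error (hypothetical sketch).

    A bounded alternative to MAPE: arctan maps each absolute percentage
    error into [0, pi/2), so zero observations yield pi/2 instead of inf.
    """
    with np.errstate(divide="ignore"):
        ape = np.abs((obs - fcst) / obs)  # absolute percentage error
    return np.mean(np.arctan(ape))
```

A real implementation would accept xarray objects and a `dim`/`weights` interface like the existing distance metrics.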
Probabilistic Metrics

- `brier_skill_score` (Brier Skill Score) [#49] {https://github.com/xarray-contrib/xskillscore/issues/49#issue-526878892}
- Add `fair` arg to `crps_ensemble` [#260] {?}
- `crpss` (Continuous Ranked Probability Skill Score) [#49] {https://github.com/pangeo-data/climpred/blob/main/climpred/metrics.py#L2176}
- `rpss` (Ranked Probability Skill Score) [#49] {https://github.com/pangeo-data/climpred/blob/main/climpred/tests/test_probabilistic.py#L252}
- Brier score decomposition {https://github.com/csiro-dcfp/doppyo/blob/6c423b32ce013933072fb1c176e502a16de15fa2/doppyo/skill.py#L868}
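For reference, the Brier Skill Score requested in #49 could be sketched as follows, using the climatological base rate as the reference forecast (a common but not the only choice; names here are hypothetical, not xskillscore API):

```python
import numpy as np

def brier_score(obs, prob):
    """Brier score for binary outcomes obs (0/1) and probabilities prob."""
    return np.mean((prob - obs) ** 2)

def brier_skill_score(obs, prob):
    """Hypothetical BSS sketch: 1 - BS / BS_ref, where the reference
    forecast is the climatological event frequency of the observations."""
    clim = np.full_like(prob, obs.mean())
    return 1.0 - brier_score(obs, prob) / brier_score(obs, clim)
```

A perfect forecast gives 1, the climatological forecast gives 0, and worse-than-climatology forecasts go negative.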
Dichotomous-Only (yes/no) Metrics

- `f1` (F1 Score) [#138] {https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html#sklearn.metrics.f1_score}
- `tpr` (True Positive Rate/Recall Score) [#138] {https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score}
- `precision` (Positive Predictive Value/Precision Score) {https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html#sklearn.metrics.precision_score}
- `rel_value` (Relative Value Score) [#229] {https://www.ecmwf.int/sites/default/files/elibrary/2007/15489-verification-probability-forecasts.pdf}
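The dichotomous metrics above all derive from the 2x2 contingency table; a minimal numpy sketch of `tpr` and `precision` from boolean event arrays (hypothetical helper, not the xskillscore API) might look like:

```python
import numpy as np

def tpr_precision(obs, fcst):
    """Hypothetical sketch: recall (TPR) and precision from boolean
    observed/forecast event arrays via the 2x2 contingency table."""
    hits = np.sum(obs & fcst)           # event forecast and observed
    misses = np.sum(obs & ~fcst)        # event observed but not forecast
    false_alarms = np.sum(~obs & fcst)  # event forecast but not observed
    tpr = hits / (hits + misses)             # a.k.a. probability of detection
    precision = hits / (hits + false_alarms) # a.k.a. positive predictive value
    return tpr, precision
```

xskillscore's existing contingency-table machinery would presumably be the natural home for these.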
Multi-Category Metrics

- `mc_threat_score` (Multi-Category Threat Score) [#187] {?}
Comparative

- `ttest_ind` (t-test for the means of two independent samples of scores) [#175] {https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_ind.html}
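For context, the scipy function referenced in #175 is used like this (plain numpy arrays here; an xskillscore wrapper would presumably broadcast over xarray dimensions):

```python
import numpy as np
from scipy.stats import ttest_ind

# Two independent samples of scores with different means
rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=500)
b = rng.normal(loc=0.5, scale=1.0, size=500)

# Null hypothesis: the two samples have identical population means
t_stat, p_value = ttest_ind(a, b)
```

With a 0.5 shift in means and 500 samples per group, the test rejects the null at any conventional significance level.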
Resampling
Metric glossaries:
- https://solarforecastarbiter.org/metrics/
What about `brier_score` with decomposition? There is a nice explanation in https://timvangelder.com/2015/05/18/brier-score-composition-a-mini-tutorial/, and an implementation (with a loop) in https://github.com/csiro-dcfp/doppyo/blob/8c620889bde8a21d5937eb7ac71e72a040b31867/doppyo/skill.py#L868. Do you think there is a solution without a loop, @dougiesquire?
Sorry, I've been on leave. I don't remember much about the `brier_score` implementation in doppyo, but I'm sure it could be improved. At the very least, we could implement the loop with numpy arrays and then wrap it with `apply_ufunc`, as is done for many of the other functions in xskillscore.
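One loop-free possibility: replace the per-bin loop with `np.digitize` + `np.bincount`, which could then be wrapped with `xr.apply_ufunc` like other xskillscore functions. This is only a sketch of the standard Murphy (1973) binned decomposition, not doppyo's actual code; names and the default bin count are assumptions:

```python
import numpy as np

def brier_decomposition(obs, prob, n_bins=10):
    """Loop-free sketch of the binned Brier score decomposition.

    obs: binary outcomes (0/1); prob: forecast probabilities in [0, 1].
    Returns (reliability, resolution, uncertainty), where
    BS = reliability - resolution + uncertainty holds exactly when
    probabilities within each bin are identical.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Bin index of each forecast probability, clipped so prob == 1 lands
    # in the last bin instead of an out-of-range one.
    idx = np.clip(np.digitize(prob, edges) - 1, 0, n_bins - 1)
    n_k = np.bincount(idx, minlength=n_bins)                  # bin counts
    p_sum = np.bincount(idx, weights=prob, minlength=n_bins)  # sum of probs
    o_sum = np.bincount(idx, weights=obs, minlength=n_bins)   # sum of events
    denom = np.maximum(n_k, 1)  # avoid 0/0 in empty bins (they contribute 0)
    p_bar = p_sum / denom       # mean forecast probability per bin
    o_bar = o_sum / denom       # observed event frequency per bin
    o_clim = obs.mean()
    n = obs.size
    reliability = np.sum(n_k * (p_bar - o_bar) ** 2) / n
    resolution = np.sum(n_k * (o_bar - o_clim) ** 2) / n
    uncertainty = o_clim * (1.0 - o_clim)
    return reliability, resolution, uncertainty
```

Wrapping this with `xr.apply_ufunc` over the sample dimension would follow the same pattern as the other probabilistic metrics.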
- cost/loss (C/L) ratio metric