keras-cv
Add Cohen's kappa metric score
Short Description
This function computes Cohen's kappa, a score that expresses the level of agreement between two annotators on a classification problem. It is defined as

kappa = (p_0 - p_e) / (1 - p_e)

where p_0 is the empirical probability of agreement on the label assigned to any sample (the observed agreement ratio), and p_e is the expected agreement when both annotators assign labels randomly. p_e is estimated using a per-annotator empirical prior over the class labels.
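As a rough illustration of the definition above (not the proposed Keras implementation), a minimal NumPy sketch that derives p_0 from the diagonal of the confusion matrix and p_e from the per-annotator marginals could look like this; the function name and signature are hypothetical:

```python
import numpy as np

def cohens_kappa(y_true, y_pred, num_classes):
    # Hypothetical sketch: build the confusion matrix between the two
    # annotators (or labels vs. predictions).
    cm = np.zeros((num_classes, num_classes), dtype=np.float64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    # p_0: observed agreement ratio (mass on the diagonal).
    p0 = np.trace(cm) / n
    # p_e: chance agreement, estimated from each annotator's
    # empirical class prior (row and column marginals).
    pe = (cm.sum(axis=1) / n) @ (cm.sum(axis=0) / n)
    return (p0 - pe) / (1 - pe)
```

This matches the behavior of `sklearn.metrics.cohen_kappa_score` for the unweighted case.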
Papers
- https://journals.sagepub.com/doi/10.1177/001316446002000104
- https://en.wikipedia.org/wiki/Cohen%27s_kappa
Existing Implementations
- https://scikit-learn.org/stable/modules/generated/sklearn.metrics.cohen_kappa_score.html
- https://github.com/tensorflow/addons/blob/master/tensorflow_addons/metrics/cohens_kappa.py
Is this a CV only metric?
OK, I'm not sure — it may be general. I've mostly used it in ordinal regression-related CV tasks. Curious: if it's general, what should be the approach to include this?
I don't know https://github.com/keras-team/keras-cv/pull/30#issuecomment-1008685239
hmm, that's an issue. @LukeWood thoughts?
@innat We had a similar discussion at https://github.com/keras-team/keras-cv/issues/137#issuecomment-1042903162
See also our threads at: https://github.com/keras-team/keras/issues/16181#issuecomment-1064366843 https://github.com/keras-team/keras-cv/issues/137#issuecomment-1043585248
It's a genuinely intricate issue and needs a separate discussion. @LukeWood could you please give some feedback on this?
cc. @fchollet @qlzh727
cc. @LukeWood A gentle reminder here.