Yoav Katz
Makes sense. Can you create a PR?
@dafnapension @elronbandel - Can you explain the motivation for this PR? What are standard metrics and how do they relate to the existing metrics?
@elronbandel @dafnapension - I'm sure you discussed this between yourselves a lot, but I want to provide a different perspective. I think stream in unitxt may be useful if unitxt used...
I think different tasks have both different defaults and different field names. It's a question of whether adding a new concept will have a significant advantage over just reusing the...
If I understand correctly, there are two changes here. 1. The ability to add constant fields to the datasets via the recipe (what happens if it overwrites an existing field?) 2....
The HF metric calls scikit-learn: https://huggingface.co/spaces/evaluate-metric/matthews_correlation/blame/0da51560adeb410656ba31b4cd1807c990898398/matthews_correlation.py

```python
from sklearn.metrics import matthews_corrcoef

def _compute(self, predictions, references, sample_weight=None):
    return {
        "matthews_correlation": float(matthews_corrcoef(references, predictions, sample_weight=sample_weight)),
    }
```

[Docs](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.matthews_corrcoef.html)
I think the issue is that in v=[0,0,0] or v=[1,1,1] there is only a single class. This is a special case not handled by the implementation.
This seems to be a known issue that has a PR, but it was never fixed: https://github.com/scikit-learn/scikit-learn/issues/25258
Right. The metric is ill-defined in this case (0/0). In the above issue they suggest adding a special flag for this, but they haven't solved it yet. Can...
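The degenerate behavior is easy to reproduce (a minimal sketch; scikit-learn currently maps the ill-defined 0/0 case to 0.0):

```python
from sklearn.metrics import matthews_corrcoef

# Both vectors contain only a single class, so the MCC denominator is 0.
# scikit-learn maps this ill-defined 0/0 case to 0.0, even though the
# predictions match the references perfectly.
print(matthews_corrcoef([1, 1, 1], [1, 1, 1]))  # -> 0.0
```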
OK. So we should add a check: if all the predictions are the same value (p) and all the references are the same value (r), we return 0 if...
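A minimal sketch of such a guard (the exact return values are an assumption, since the comment above is truncated; here I assume 1.0 when p equals r and 0.0 otherwise, and `safe_mcc` is a hypothetical wrapper name):

```python
from sklearn.metrics import matthews_corrcoef

def safe_mcc(predictions, references):
    # Degenerate single-class case, where sklearn's matthews_corrcoef
    # is ill-defined (0/0) and silently returns 0.0.
    # Assumption: return 1.0 when both sides collapse to the same
    # single class, 0.0 otherwise.
    if len(set(predictions)) == 1 and len(set(references)) == 1:
        return 1.0 if predictions[0] == references[0] else 0.0
    return float(matthews_corrcoef(references, predictions))
```

With this guard, `safe_mcc([1, 1, 1], [1, 1, 1])` gives 1.0 instead of sklearn's 0.0, while mixed-class inputs fall through to the standard computation.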