
Add loss as an evaluation metric

Open dan-bishopfox opened this issue 5 years ago • 3 comments

Return and print the average loss against the evaluation set.

dan-bishopfox avatar Apr 16 '19 18:04 dan-bishopfox
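For the average-loss request above, a minimal sketch of what "return and print the average loss against the evaluation set" could look like, assuming a compiled Keras model (eyeballer trains with Keras); the model architecture, array names, and label count below are placeholders, not eyeballer's actual code:

```python
# Minimal sketch (not eyeballer's real model): report the average loss on a
# held-out evaluation set via model.evaluate().
import numpy as np
from tensorflow.keras import layers, models

NUM_LABELS = 4  # placeholder for the number of eyeballer labels

model = models.Sequential([
    layers.Input(shape=(8,)),
    layers.Dense(NUM_LABELS, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["binary_accuracy"])

# Stand-in evaluation data; in practice this would be the real eval split.
eval_x = np.random.rand(32, 8).astype("float32")
eval_y = (np.random.rand(32, NUM_LABELS) > 0.5).astype("float32")

# evaluate() returns [loss, *metrics] when metrics are compiled in.
eval_loss, eval_acc = model.evaluate(eval_x, eval_y, verbose=0)
print(f"Average evaluation loss (binary crossentropy): {eval_loss:.4f}")
```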

Perhaps even a loss distribution? Like, a loss histogram might be neat and helpful.

dan-bishopfox avatar Apr 16 '19 19:04 dan-bishopfox
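Sketching the histogram idea: per-sample binary crossentropy can be computed directly from the predicted probabilities and dropped into a plain matplotlib histogram. The arrays below are stand-ins; in practice `y_pred` would come from `model.predict()` on the evaluation set.

```python
# Sketch of a per-sample loss histogram (array names are placeholders).
import numpy as np
import matplotlib.pyplot as plt

def per_sample_bce(y_true, y_pred, eps=1e-7):
    """Mean binary crossentropy across labels, one value per sample."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred),
                    axis=1)

# Stand-in data; in practice y_pred = model.predict(eval_x).
y_true = (np.random.rand(200, 4) > 0.5).astype("float32")
y_pred = np.random.rand(200, 4)

losses = per_sample_bce(y_true, y_pred)
print(f"Average loss: {losses.mean():.4f}")

plt.hist(losses, bins=30)
plt.xlabel("Per-sample binary crossentropy")
plt.ylabel("Count")
plt.title("Evaluation loss distribution")
plt.savefig("loss_histogram.png")
```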

Can you provide more information on this issue? We already compute the hamming_loss and report it as Overall Binary Accuracy.

Is loss defined as the set of disjoint elements between the ground truth and the predictions? If so, wouldn't a histogram just show a binary true/false for each element in the set?

the-bumble avatar May 20 '19 20:05 the-bumble
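For contrast between the two measures: hamming loss only sees the thresholded labels and counts mismatches, while a crossentropy-style loss uses the raw probabilities, so it penalizes confident wrong answers and rewards confident right ones. A toy illustration (made-up values, not project data):

```python
# Toy illustration: hamming loss vs. log loss on the same predictions.
import numpy as np
from sklearn.metrics import hamming_loss, log_loss

y_true = np.array([1, 0, 1, 1])
probs_a = np.array([0.55, 0.45, 0.60, 0.95])  # barely on the right side of 0.5
probs_b = np.array([0.99, 0.01, 0.99, 0.95])  # confidently correct

# Hamming loss only sees thresholded labels, so both score identically (0.0).
print(hamming_loss(y_true, (probs_a > 0.5).astype(int)))
print(hamming_loss(y_true, (probs_b > 0.5).astype(int)))

# Log loss (binary crossentropy) distinguishes them: A is higher, B is lower.
print(log_loss(y_true, probs_a))
print(log_loss(y_true, probs_b))
```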

It's binary crossentropy. I think this scikit-learn function should be it:

https://scikit-learn.org/stable/modules/generated/sklearn.metrics.log_loss.html

The multi-label loss is a bit of a lesser-used scenario, so maybe it's not the right one.

dan-bishopfox avatar May 20 '19 22:05 dan-bishopfox
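On the multi-label point: `sklearn.metrics.log_loss` is aimed at the binary/multiclass case, but since each label here is an independent binary decision, one workable approach is to compute it per label column and average. A hedged sketch with placeholder arrays:

```python
# Sketch: averaging sklearn's log_loss over independent binary label columns.
# y_true/y_pred are placeholders shaped (num_samples, num_labels).
import numpy as np
from sklearn.metrics import log_loss

y_true = (np.random.rand(100, 4) > 0.5).astype(int)
y_pred = np.random.rand(100, 4)  # predicted probabilities per label

per_label = [
    log_loss(y_true[:, i], y_pred[:, i], labels=[0, 1])
    for i in range(y_true.shape[1])
]
print("Per-label log loss:", np.round(per_label, 4))
# The mean over labels equals binary crossentropy averaged over the eval set.
print("Average log loss:", np.mean(per_label))
```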