eyeballer
Add loss as an evaluation metric
Return and print the average loss against the evaluation set. Perhaps even a loss distribution? A loss histogram might be neat and helpful.
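A minimal sketch of what this could look like, assuming per-sample binary crossentropy (the loss named later in this thread). The arrays below are made-up stand-ins for eyeballer's real evaluation labels and predictions:

```python
# Hypothetical sketch: per-sample loss over an evaluation set, plus a
# crude text histogram of the loss distribution. Labels/predictions here
# are illustrative stand-ins, not eyeballer's real data.
import numpy as np

def per_sample_bce(y_true, y_pred, eps=1e-7):
    """Mean binary crossentropy across labels, one value per sample."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred)
                    + (1 - y_true) * np.log(1 - y_pred), axis=1)

y_true = np.array([[1, 0, 1], [0, 0, 1], [1, 1, 0]])
y_pred = np.array([[0.9, 0.2, 0.8], [0.1, 0.3, 0.6], [0.4, 0.9, 0.1]])

losses = per_sample_bce(y_true, y_pred)
print(f"average loss: {losses.mean():.4f}")

# Loss histogram: count samples per bin, drawn as a bar of '#'.
counts, edges = np.histogram(losses, bins=5, range=(0.0, 1.0))
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:.1f}-{hi:.1f}: {'#' * c}")
```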
Can you provide more information on this issue? We already compute the hamming_loss and report it as Overall Binary Accuracy.
Is loss defined as the set of disjoint elements between the ground truth and predictions? If so, wouldn't a histogram just be binary true or false for each element in the set?
It's binary crossentropy. I think this scikit-learn function should be it?
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.log_loss.html
The multi-label loss is a bit of a lesser-used scenario, so maybe it's not the right one.
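One possible workaround, sketched under the assumption that each label is scored as an independent binary problem: call the linked `log_loss` once per label column and average the results. The arrays are illustrative stand-ins, not eyeballer's real labels:

```python
# Hedged sketch: sklearn's log_loss assumes a single binary/multiclass
# target, so for a multi-label model one option is to score each label
# column independently and average. Data below is made up for illustration.
import numpy as np
from sklearn.metrics import log_loss

y_true = np.array([[1, 0, 1], [0, 0, 1], [1, 1, 0]])
y_pred = np.array([[0.9, 0.2, 0.8], [0.1, 0.3, 0.6], [0.4, 0.9, 0.1]])

# One binary crossentropy per label; labels=[0, 1] guards against a
# column that happens to contain only one class.
per_label = [log_loss(y_true[:, j], y_pred[:, j], labels=[0, 1])
             for j in range(y_true.shape[1])]
avg = float(np.mean(per_label))
print(f"average binary crossentropy: {avg:.4f}")
```

Averaging the per-label values this way gives the same number as averaging binary crossentropy over every (sample, label) cell, since each column covers the same number of samples.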