
Evaluation metrics per class for multilabel sequence classification

ehsan-soe opened this issue 5 years ago • 2 comments

Hi,

Does the repo support reporting evaluation metrics per class for multilabel sequence classification?

Thanks

ehsan-soe avatar Dec 03 '19 22:12 ehsan-soe

same ask here!

tbs17 avatar Apr 08 '20 01:04 tbs17

Here are the results I got for the toxic comments dataset using fast_bert:

                   precision    recall  f1-score   support

            toxic       0.50      0.93      0.65      6090
     severe_toxic       0.40      0.50      0.45       367
          obscene       0.61      0.84      0.71      3691
           threat       0.00      0.00      0.00       211
           insult       0.59      0.78      0.67      3427
    identity_hate       0.79      0.38      0.52       712

        micro avg       0.55      0.82      0.66     14498
        macro avg       0.48      0.57      0.50     14498
     weighted avg       0.55      0.82      0.65     14498
      samples avg       0.08      0.08      0.08     14498

Test F1 Accuracy: 0.6569613259668508
Test Flat Accuracy: 0.859779924348995

{'roc_auc': {'toxic': 0.747083913926278, 'severe_toxic': 0.7007573790439838, 'obscene': 0.79970607034752, 'threat': nan, 'insult': 0.7877764952425138, 'identity_hate': 0.8918915659392267}}
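For anyone looking to reproduce a per-class report like the one above: it matches the output shape of scikit-learn's `classification_report` applied to the binary indicator matrices, and per-class ROC AUC can be computed column by column with `roc_auc_score`. A minimal sketch, assuming you already have the label list plus true labels and predicted probabilities as `(n_samples, n_labels)` arrays (random placeholders are used here instead of real model output):

```python
import numpy as np
from sklearn.metrics import classification_report, roc_auc_score

labels = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Placeholder data standing in for real ground truth and model probabilities:
# y_true is a binary indicator matrix, y_prob holds per-label probabilities.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, len(labels)))
y_prob = rng.random((100, len(labels)))

# Threshold probabilities at 0.5 to get hard per-label predictions.
y_pred = (y_prob >= 0.5).astype(int)

# Per-class precision/recall/F1 plus micro/macro/weighted/samples averages.
report = classification_report(y_true, y_pred, target_names=labels, zero_division=0)
print(report)

# Per-class ROC AUC, one score per label column.
aucs = {name: roc_auc_score(y_true[:, i], y_prob[:, i])
        for i, name in enumerate(labels)}
print(aucs)
```

Note that a class with no positive examples in the test split (like `threat` apparently was above) yields `nan` for ROC AUC, since the score is undefined when only one class is present.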

@kaushaltrivedi Can you please comment on it?

rpoli40 avatar Jan 15 '21 17:01 rpoli40