CMU-MultimodalSDK
Classwise Accuracy calculations (WA) for MOSEI emotions
@A2Zadeh, I am trying to understand how accuracies are calculated for MOSEI emotions, since it is a multi-class, multi-label task. Do we need to use a sigmoid at the output layer with 6 output units so that the model can predict each label independently? Another question: how do we use the weighted accuracy function defined at the link below for class-wise accuracy? Do we need to provide class-specific predictions and ground truths to the classification function and the weighted accuracy function, and repeat the same process for all 6 emotion classes? Also, how do we calculate the F1 score?
Link: https://github.com/yaohungt/Multimodal-Transformer/blob/master/src/eval_metrics.py
Thanks in advance.
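In case it helps others reading this thread, here is a minimal sketch of per-class evaluation, modeled on the weighted accuracy formula used for CMU-MOSEI, (TP * N/P + TN) / (2N). The emotion column order and the example arrays are hypothetical, and each column is evaluated as an independent binary problem, assuming predictions and ground truths are shaped `(num_samples, 6)`:

```python
import numpy as np

def weighted_accuracy(preds, truths):
    """Weighted accuracy for one emotion class: (TP * N/P + TN) / (2N),
    where P and N count positive and negative ground-truth samples."""
    true_pos = truths > 0
    pred_pos = preds > 0
    tp = float(np.sum(true_pos & pred_pos))
    tn = float(np.sum(~true_pos & ~pred_pos))
    p = float(np.sum(true_pos))
    n = float(np.sum(~true_pos))
    return (tp * (n / p) + tn) / (2.0 * n)

def binary_f1(preds, truths):
    """F1 score for one emotion class (positive class = emotion present)."""
    true_pos = truths > 0
    pred_pos = preds > 0
    tp = float(np.sum(true_pos & pred_pos))
    fp = float(np.sum(~true_pos & pred_pos))
    fn = float(np.sum(true_pos & ~pred_pos))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical binarized predictions/ground truths, shape (num_samples, 6);
# the column order here is illustrative, not the dataset's actual order.
emotions = ["happy", "sad", "angry", "fear", "disgust", "surprise"]
preds = np.array([[1, 0, 0, 0, 1, 0],
                  [0, 1, 0, 1, 0, 0]])
truths = np.array([[1, 0, 1, 0, 1, 0],
                   [0, 1, 0, 1, 0, 1]])

# Repeat the same binary evaluation for each of the 6 emotion columns.
for i, name in enumerate(emotions):
    wa = weighted_accuracy(preds[:, i], truths[:, i])
    f1 = binary_f1(preds[:, i], truths[:, i])
    print(f"{name}: WA={wa:.3f} F1={f1:.3f}")
```

Note that the per-class weighted accuracy assumes each class has at least one positive and one negative ground-truth sample; otherwise P or N is zero and the formula is undefined.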
Hi @A2Zadeh, following up on your reply in the issue below: https://github.com/A2Zadeh/CMU-MultimodalSDK/issues/220#issuecomment-668958297.
Does this mean we need to train an individual binary classification model for each emotion to get results in line with Graph-MFN in the MOSEI base paper? Thanks in advance.
Your reply to the above issue, for quick reference: "Hi tungnt55. The dataset actually has 4 classes, but we do binary in the experiments with Graph-MFN. The criteria for the emotion is >0 being equal to present and =0 being equal to absent."
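For reference, that ">0 means present, =0 means absent" criterion can be applied directly to the continuous emotion annotations to get binary multi-label targets. The array shape and the intensity values below are illustrative, not the dataset's actual layout:

```python
import numpy as np

# Hypothetical emotion intensity scores for two segments; in CMU-MOSEI each
# of the 6 emotions is annotated on a continuous intensity scale, where 0
# means the emotion is absent.
emotion_scores = np.array([
    [0.0, 1.3, 0.0, 0.6, 0.0, 2.0],
    [0.3, 0.0, 0.0, 0.0, 0.0, 0.0],
])

# Binarize: any intensity > 0 counts as "present".
binary_labels = (emotion_scores > 0).astype(int)
print(binary_labels)
# [[0 1 0 1 0 1]
#  [1 0 0 0 0 0]]
```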
Hi @AnilRahate,
Yes exactly! <== to both your questions.
Let me know if there were further questions.
Thanks a lot for the response @A2Zadeh. I would also like to know the threshold value to use on the predictions to classify them as positive/negative. Can we use the standard 0.5, or do you have specific guidance on that?
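One observation that may help: if the 6 output units use sigmoid activations, thresholding the probabilities at 0.5 is mathematically equivalent to thresholding the raw logits at 0, which matches the ">0 means present" convention. A small sketch with made-up logit values:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical raw outputs (logits) of a 6-unit sigmoid head for one sample.
logits = np.array([-1.2, 0.3, 2.1, -0.5, 0.0, 0.8])
probs = sigmoid(logits)

# sigmoid(x) > 0.5  <=>  x > 0, so both thresholds give identical labels.
preds_from_probs = (probs > 0.5).astype(int)
preds_from_logits = (logits > 0).astype(int)
assert (preds_from_probs == preds_from_logits).all()
print(preds_from_probs)  # [0 1 1 0 0 1]
```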
@AnilRahate Hi, your work is excellent, but I have a question about how you handle the labels for CMU-MOSEI. I am currently working on a six-class classification task for audio and urgently need the labels for the audio segments; I hope you can help me. Thanks in advance!