speech-emotion-recognition-using-self-attention

The results of the code

Open jingyu-95 opened this issue 4 years ago • 6 comments

Hi, I cannot get the same results as the paper either, and there is overfitting: the train WA is over 80%, but the test WA is only about 50%. Do you have the same problem? Finally, I want to ask about your results for WA and UA.

jingyu-95 avatar Jul 23 '20 08:07 jingyu-95
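
For reference, WA and UA in this thread are the weighted accuracy (overall accuracy over all test utterances) and unweighted accuracy (average of the per-class recalls) commonly reported for speech emotion recognition. A minimal sketch of how they can be computed from predictions, using only NumPy (the function and variable names are illustrative, not from this repository):

```python
import numpy as np

def weighted_accuracy(y_true, y_pred):
    """WA: fraction of all test utterances classified correctly."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

def unweighted_accuracy(y_true, y_pred):
    """UA: recall computed per emotion class, then averaged (class-balanced)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))
```

A large gap between the training and test values of either metric, such as the ~80% train WA versus ~50% test WA mentioned above, is the overfitting being discussed.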

> Hi Jingyu, I have also faced the same problem, and I have written a mail to the authors about it. I even asked them for the code, but they said they cannot share it because of patent work.

OK, how about the results without multi-task learning in this paper? Did you get a WA of about 70.5%? [image attached]

jingyu-95 avatar Jul 23 '20 09:07 jingyu-95
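
The "without multi-task learning" configuration discussed here is essentially the self-attention model on its own. As an illustration of the general technique only (a generic self-attentive pooling layer, not the authors' exact architecture), a minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class SelfAttentivePooling(nn.Module):
    """Collapse frame-level features (batch, time, dim) into a single
    utterance-level vector using a learned attention weight per frame."""
    def __init__(self, dim: int, attn_dim: int = 128):
        super().__init__()
        self.proj = nn.Linear(dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, dim) frame-level acoustic features
        scores = self.score(torch.tanh(self.proj(frames)))   # (batch, time, 1)
        weights = torch.softmax(scores, dim=1)                # attention over time
        return (weights * frames).sum(dim=1)                  # (batch, dim)
```

A linear classifier on top of the pooled vector then predicts the emotion class; that single-head setup corresponds to the "no multi-task" numbers being compared here.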

> OK, how about the results without multi-task learning in this paper? Did you get a WA of about 70.5%?

Without multi-task learning you should get about 51% UA according to the paper. Their claim is that the multi-task component gives the huge improvement; that is also what the authors say in their mail. So stand-alone self-attention is not a great add-on here.

KrishnaDN avatar Jul 23 '20 09:07 KrishnaDN
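
For context on what the multi-task component adds, a minimal sketch of a two-head setup is shown below. The auxiliary task, the loss weight `alpha`, and all names here are assumptions for illustration, not details taken from the paper or the repository:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskSER(nn.Module):
    """Shared encoder with two classification heads: emotion plus an auxiliary task."""
    def __init__(self, encoder: nn.Module, feat_dim: int,
                 num_emotions: int = 4, num_aux_classes: int = 2):
        super().__init__()
        self.encoder = encoder                        # e.g. CNN/BLSTM + attention pooling
        self.emotion_head = nn.Linear(feat_dim, num_emotions)
        self.aux_head = nn.Linear(feat_dim, num_aux_classes)  # auxiliary task (assumption)

    def forward(self, x: torch.Tensor):
        h = self.encoder(x)                           # (batch, feat_dim) utterance embedding
        return self.emotion_head(h), self.aux_head(h)

def multitask_loss(emo_logits, aux_logits, emo_labels, aux_labels, alpha: float = 0.3):
    """Joint objective: emotion cross-entropy plus a weighted auxiliary cross-entropy."""
    return F.cross_entropy(emo_logits, emo_labels) + alpha * F.cross_entropy(aux_logits, aux_labels)
```

Whether this kind of joint objective alone accounts for the reported jump in WA/UA is exactly what is being questioned in the rest of the thread.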

Thank you for the reply, KrishnaDN. But in fact the results did not improve much even after adding multi-task learning.

jingyu-95 avatar Jul 23 '20 09:07 jingyu-95

> Thank you for the reply, KrishnaDN. But in fact the results did not improve much even after adding multi-task learning.

That is what I am also surprised about. I tried asking them for their code, but for various reasons they cannot provide it. I think you should talk to the authors once; things may become clearer.

KrishnaDN avatar Jul 23 '20 09:07 KrishnaDN

> That is what I am also surprised about. I tried asking them for their code, but for various reasons they cannot provide it. I think you should talk to the authors once; things may become clearer.

OK, I will write to the authors. I hope we can find the answer.

jingyu-95 avatar Jul 23 '20 09:07 jingyu-95

Hi @jingyu-95, did you get any answer or make any progress? Could you please share your complete code? Thanks.

zhaoxy0303 avatar Dec 23 '20 09:12 zhaoxy0303