speech-emotion-recognition-using-self-attention

about the results

Open LoganLiu66 opened this issue 4 years ago • 5 comments

Hello, I want to ask if you got the same results as those mentioned in this paper. I tried my best but can't reproduce them. The attached file has some details about my code; I want to know if there is something wrong with it. Thanks. code.txt

LoganLiu66 avatar Mar 28 '20 07:03 LoganLiu66

Hi, that's right, we won't get the same accuracy; we won't even get close to what they are reporting. I wrote to the authors and asked them for their code so that I could compare, but they did not agree to share it. There is no way to validate these results unless they release their code.

KrishnaDN avatar Mar 30 '20 08:03 KrishnaDN

Hi, Krishna! Thanks for sharing your code! What are the best WA and UA in your implementation, with and without multitask learning?

youcaiSUN avatar May 23 '20 16:05 youcaiSUN

As of now we get around 56% WA; I don't remember what we get for UA. Without multi-task learning we get about the same accuracy as mentioned in the paper. According to the paper, we should get a huge boost when we add multi-task learning, and clearly that is not happening. I spoke to the original authors but had no luck.
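
For reference, by WA I mean the overall (weighted) accuracy across all test utterances, and by UA the average of the per-class recalls. A minimal sketch of how they can be computed (illustrative NumPy code, not the repo's exact implementation):

```python
import numpy as np

def wa_ua(y_true, y_pred):
    """Return (WA, UA): overall accuracy and mean per-class recall."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    wa = float((y_true == y_pred).mean())  # weighted by class frequency
    recalls = [float((y_pred[y_true == c] == c).mean()) for c in np.unique(y_true)]
    ua = float(np.mean(recalls))           # each class counts equally
    return wa, ua

# Toy example with four emotion classes (e.g., angry/happy/neutral/sad)
print(wa_ua([0, 0, 1, 2, 3, 3], [0, 1, 1, 2, 3, 0]))  # -> (~0.667, 0.75)
```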

KrishnaDN avatar May 23 '20 16:05 KrishnaDN

Hi, I cannot get the same results as the paper either, and there is overfitting: the train WA is over 80%, but the test WA is only about 50%. Do you have the same problem?

jingyu-95 avatar Jul 18 '20 09:07 jingyu-95

Hi, sorry for the delayed response. I have fixed some of the issues and added a learning-rate schedule that reduces the learning rate based on the learning curve. There should not be any overfitting, because the number of layers and hidden units is exactly the same as in the paper. In my tests, if you run 5-fold cross-validation, you should get ~54-55% average accuracy. My GPU servers are completely occupied for a week or so; I will upload the pretrained models and loss curves as soon as possible.
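
For anyone trying to reproduce this, the schedule I mean is the standard reduce-on-plateau pattern; here is a minimal sketch assuming a PyTorch setup (the model and hyperparameter values below are placeholders, not the repo's actual ones):

```python
import torch
import torch.nn as nn

model = nn.Linear(40, 4)  # placeholder for the actual self-attention model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Halve the learning rate when the validation loss stops improving for 3 epochs
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=3)

for epoch in range(20):
    # ... train for one epoch and evaluate on the validation fold here ...
    val_loss = 1.0  # placeholder: a flat loss curve triggers the LR reductions
    scheduler.step(val_loss)
    print(epoch, optimizer.param_groups[0]["lr"])
```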

KrishnaDN avatar Dec 10 '20 08:12 KrishnaDN