Self_Explaining_Structures_Improve_NLP_Models

Baseline results without Self-Explaining

Open · Munzu opened this issue 2 years ago · 1 comment

In your paper, you reported the following results on SST-5 for RoBERTa without Self-Explaining as a baseline:

| Model | Accuracy |
| --- | --- |
| RoBERTa-base | 56.4% |
| RoBERTa-large | 57.9% |

The original paper by Liu et al. (2019b) does not report any results on SST-5, so I'm assuming you obtained these numbers yourselves. Could you share how you did that? Did you fine-tune these baselines on the SST-5 dataset, or is this out-of-the-box performance? Many thanks in advance.

Munzu · Jun 03 '22 19:06

I have run into the same problem. I used the official Hugging Face script to fine-tune RoBERTa-base, but I cannot come close to these results. Have you managed to reproduce the baseline, and if so, would you mind sharing the configuration?
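For reference, below is a minimal sketch of how such a baseline could be fine-tuned with the Hugging Face `Trainer`. This is not the authors' script: the dataset id (`SetFit/sst5`) and all hyperparameters (learning rate, batch size, epochs, max length) are assumptions on my part, not the paper's configuration.

```python
# Minimal sketch (not the authors' setup): fine-tune RoBERTa-base on SST-5
# with the Hugging Face Trainer. Assumes the "SetFit/sst5" mirror of SST-5
# on the Hub, which exposes "text" and "label" columns; hyperparameters
# below are guesses and would need tuning to match the reported numbers.
import numpy as np
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

dataset = load_dataset("SetFit/sst5")  # assumed dataset id on the Hub
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=5)

def tokenize(batch):
    # Tokenize sentences; padding is handled dynamically by the data collator.
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True)

def accuracy(eval_pred):
    # Compute plain accuracy from logits and gold labels.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}

args = TrainingArguments(
    output_dir="sst5-roberta-base",
    learning_rate=2e-5,                  # guessed, not from the paper
    per_device_train_batch_size=32,
    num_train_epochs=5,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
    compute_metrics=accuracy,
)
trainer.train()
print(trainer.evaluate(dataset["test"]))  # test-set accuracy
```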

ZeqiangWangAI · Jul 31 '22 04:07