LoRA
Clarifying questions about the paper
Hi, thank you for sharing the source code. I really enjoyed the work you propose.
While reading the paper and reproducing the results I got a couple of questions:
- In Table 3, the row with the results for GPT2-M AdapterH is written without an asterisk, but I couldn't find any source code that implements GPT2-M with AdapterH. Is this a typo?
- Regarding the computation of METEOR for the WebNLG and DART datasets: I can't reproduce the result for this metric with the script you proposed from the GenerationEval repo. I wrote my own script that evaluates WebNLG and DART using the HuggingFace `evaluate` library and got the same (very close) results. So, how did you obtain such a METEOR score?