PLM_annotator
Code for our ACL 2021 paper: Language Model as an Annotator: Exploring DialoGPT for Dialogue Summarization
PLM_annotator issues
Two questions: 1. Running `python get_loss.py -d ami` is slow; each example takes roughly 1~2 min to process. Is that a normal speed? 2. In the second step, `python recover_word_loss.py -d [my own dataset]` throws an error: `Load train_loss.json finished, Data size:100 Load valid_loss.json finished, Data size:30 Load test_loss.json finished, Data size:30 Traceback...
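On the speed question: AMI meetings contain many utterances, so minutes per example are plausible, especially when DialoGPT runs on CPU. Below is a minimal sketch of the kind of per-utterance loss computation involved; the checkpoint `microsoft/DialoGPT-large` and the helper `utterance_loss` are assumptions for illustration and may not match the repo's actual `get_loss.py`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative sketch only; the repo's get_loss.py may differ in detail.
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large").to(device).eval()

@torch.no_grad()
def utterance_loss(context: str, utterance: str) -> float:
    """Average token loss of `utterance` given `context` under DialoGPT."""
    ctx_ids = tokenizer.encode(context + tokenizer.eos_token)
    utt_ids = tokenizer.encode(utterance + tokenizer.eos_token)
    input_ids = torch.tensor([ctx_ids + utt_ids], device=device)
    # Mask out the context tokens so the loss is computed over the utterance only.
    labels = input_ids.clone()
    labels[:, : len(ctx_ids)] = -100
    return model(input_ids, labels=labels).loss.item()

print(utterance_loss("How are you?", "I am fine, thanks."))
```

Running this on GPU (the `device` check above) is usually the main lever for reducing per-example time.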
Hello. Could you please tell us whether the ROUGE scores in your paper are recall, precision, or F1? And for the BART baseline, did you do any preprocessing such as deleting `\n\r` or...
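For reference on the ROUGE question, the sketch below shows how recall, precision, and F1 are read from the `rouge_score` package and how `\n\r` characters might be stripped before scoring. It is illustrative only; the paper's evaluation may use a different ROUGE implementation and preprocessing.

```python
from rouge_score import rouge_scorer

def clean(text: str) -> str:
    # Example preprocessing: collapse \r and \n into spaces before scoring.
    return " ".join(text.replace("\r", " ").replace("\n", " ").split())

reference = "the cat sat on the mat"
hypothesis = "a cat was sitting on the mat\r\n"

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(clean(reference), clean(hypothesis))
r1 = scores["rouge1"]
print(f"ROUGE-1  R={r1.recall:.4f}  P={r1.precision:.4f}  F1={r1.fmeasure:.4f}")
```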