qags
How shall I create the file "test.src"?
How shall I create the file "test.src" at the question generation step? And what does it contain?
@mriganktiwari I guess it's from the part preparing (tokenizing and binarizing) the inputs for the QG model that corresponds to P(Q|Y). So I thought ~~this should be the test.tgt of the CNN/DM dataset (and did so)~~
Correct me if I'm wrong =]
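In case it helps others, the tokenize/binarize step for a fairseq BART QG model (which I believe is what the original repo uses) typically follows fairseq's BART recipe. This is only a sketch from my setup; the GPT-2 BPE files, the dictionary path, and the split names below are placeholders, not the exact qags scripts:

```python
# Sketch: BPE-encode and binarize QG inputs for a fairseq BART model.
# Run from the fairseq repo root; encoder.json / vocab.bpe / dict.txt are
# the GPT-2 BPE and BART dictionary files from fairseq's BART examples.
import subprocess

SPLIT = "test"  # e.g. test.src: one QG input per line

# 1) Tokenize with the GPT-2 BPE encoder shipped in fairseq's examples.
subprocess.run([
    "python", "-m", "examples.roberta.multiprocessing_bpe_encoder",
    "--encoder-json", "encoder.json",
    "--vocab-bpe", "vocab.bpe",
    "--inputs", f"{SPLIT}.src",
    "--outputs", f"{SPLIT}.bpe.src",
    "--workers", "8",
    "--keep-empty",
], check=True)

# 2) Binarize, reusing BART's dictionary so token ids match the model.
subprocess.run([
    "fairseq-preprocess",
    "--only-source",
    "--source-lang", "src",
    "--testpref", f"{SPLIT}.bpe",
    "--destdir", "qg-bin",
    "--srcdict", "dict.txt",
    "--workers", "8",
], check=True)
```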
@mriganktiwari I found that I need to tokenize and binarize `test_w_10ans.txt` instead of the original test-tgt split (`test.txt.tgt.tagged`) to make up 100 questions per test split sample (that is, per doc-summary pair).
I guess the notation P(Q|Y) was a good idea to avoid misleading readers about what QAGS measures, but for reproduction it leaves a question mark. Shouldn't it be P(Q|Y;A)? @W4ngatang
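For anyone stuck at the same place, this is roughly how I think of that file: one QG input line per (summary, answer-candidate) pair, with 10 candidates per summary, so that 10 generated questions per line add up to 100 per sample. The `[SEP]`-style layout and the function below are my own placeholders, not the exact format the qags scripts emit, so double-check against the repo:

```python
# Sketch: assemble a test_w_10ans.txt-like QG input file.
# Each summary gets 10 answer candidates; one line per (answer, summary)
# pair. The "answer [SEP] summary" layout is an assumption for illustration.

def write_qg_inputs(summaries, answers_per_summary, out_path="test_w_10ans.txt"):
    """summaries: list[str]; answers_per_summary: list of 10-answer lists."""
    with open(out_path, "w", encoding="utf-8") as out:
        for summary, answers in zip(summaries, answers_per_summary):
            assert len(answers) == 10, "expected 10 answer candidates"
            for answer in answers:
                # Conditioning on both summary (Y) and answer (A),
                # i.e. P(Q|Y;A) as discussed above.
                out.write(f"{answer} [SEP] {summary}\n")
```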
> How shall I create the file "test.src" at the question generation step? And what does it contain?
Have you figured it out? When reproducing this work, I was also confused by this problem.
> @mriganktiwari I found that I need to tokenize and binarize `test_w_10ans.txt` instead of the original test-tgt split (`test.txt.tgt.tagged`) to make up 100 questions per test split sample (that is, per doc-summary pair). I guess the notation P(Q|Y) was a good idea to avoid misleading readers about what QAGS measures, but for reproduction it leaves a question mark. Shouldn't it be P(Q|Y;A)? @W4ngatang
Do you mean that `test_w_10ans.txt` is the only file that needs to be tokenized?
Any new information on this one? I also don't know how to generate this file.
> @mriganktiwari I found that I need to tokenize and binarize `test_w_10ans.txt` instead of the original test-tgt split (`test.txt.tgt.tagged`) to make up 100 questions per test split sample (that is, per doc-summary pair). I guess the notation P(Q|Y) was a good idea to avoid misleading readers about what QAGS measures, but for reproduction it leaves a question mark. Shouldn't it be P(Q|Y;A)? @W4ngatang
I don't really remember the details of the code, but I succeeded in reproducing it after writing the comment quoted above. Sorry that I cannot share the actual code I ran for the experiment (it is lost). But if you want to reproduce it, the original author's code is worth reading once you have read the paper. It was a good starting point for me, and it did not take too long to fill the gaps. If you replace the generation models in this work with recent language models, it will definitely work better. IMHO, if I had to revisit this work, I wouldn't bother training small models as the original work did; I would just adapt instruction-tuned LLMs with some good instructions instead (see the sketch below). @dlaredo @Zhou-Zoey
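To sketch what I mean by that (the prompt wording and the `ask_llm` callable are hypothetical stand-ins, not anything from the paper or repo):

```python
# Sketch: swap the trained BART QG model for a prompted, instruction-tuned
# LLM. `ask_llm` is a placeholder for whatever LLM client you use.

QG_PROMPT = (
    "Given the following summary and an answer span taken from it, "
    "write one question whose answer is exactly that span.\n\n"
    "Summary: {summary}\n"
    "Answer: {answer}\n"
    "Question:"
)

def generate_question(summary: str, answer: str, ask_llm) -> str:
    # ask_llm: Callable[[str], str] supplied by the caller.
    return ask_llm(QG_PROMPT.format(summary=summary, answer=answer)).strip()
```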
I hope my mail exchange with the author can help you reproduce it, or help in some other way.