seq2seq-summarizer
Pointer-generator reinforced seq2seq summarization in PyTorch
This code uses the batch-averaged (sample_rouge - baseline_rouge), but that doesn't make sense mathematically; this term should be sample-wise, because what we really want to maximize is this:
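For illustration, here is a minimal plain-Python sketch of a sample-wise self-critical (REINFORCE-style) loss, where each sequence's log-probability is weighted by its own reward advantage before the batch average is taken. All names are illustrative, not taken from this repo:

```python
def self_critical_loss(log_probs, sample_rouge, baseline_rouge):
    """Sample-wise REINFORCE loss with a self-critical baseline.

    log_probs: summed log-probability of each sampled sequence (one per sample)
    sample_rouge / baseline_rouge: per-sample ROUGE rewards
    """
    # The advantage (sample_rouge - baseline_rouge) is formed per sample,
    # *before* any averaging, so each sequence is reinforced in proportion
    # to its own reward gap rather than the batch-averaged one.
    terms = [-(s - b) * lp
             for lp, s, b in zip(log_probs, sample_rouge, baseline_rouge)]
    return sum(terms) / len(terms)
```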
Hello @ymfa, I am now training this model. Could you tell me what the ideal ROUGE score is? Thank you.
I run the code on a GPU server, but when ROUGE runs to evaluate the results, this error comes out: /bin/sh: 1: ./ROUGE-1.5.5.pl: Permission denied. Do you know what happened?...
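This error usually means /bin/sh found the Perl script but it lacks the execute bit (which can happen after a copy or checkout). A likely fix, assuming the path shown in the error message:

```shell
# "Permission denied" from /bin/sh means the script exists but is not executable.
[ -f ./ROUGE-1.5.5.pl ] || touch ./ROUGE-1.5.5.pl  # placeholder so this sketch runs anywhere
chmod +x ./ROUGE-1.5.5.pl
# Alternatively, invoke it through the interpreter without changing permissions:
# perl ./ROUGE-1.5.5.pl
```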
Sorry, but how can I get data/.vector_cache/glove.6B.100d.txt?
I am new to Python and NLP, so I am not sure how to solve this problem. I would appreciate it if you could tell me how to solve it...
Hello, I have a problem. When I train the model using the Google data, the training loss decreases, but the validation loss starts increasing at about 6....
I want to ask about the end_token of the generated summary. For example: extras needed for film that starts island in the island ; ' for rise again '' film...
The authors of the pointer-generator propose a method called the coverage mechanism: the coverage vector is the sum of attention distributions over all previous decoder timesteps, but the coverage vector in this...
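As a sketch of that mechanism: the coverage vector is the running sum of past attention distributions, and the coverage loss penalizes attending again to positions that were already covered. A plain-Python illustration of one decoder step (function and variable names are mine, not the repo's):

```python
def coverage_step(coverage, attention):
    """One decoder timestep of the coverage mechanism.

    coverage:  sum of attention distributions from all previous timesteps
    attention: the current timestep's attention distribution
    """
    # Coverage loss sum_i min(a_i, c_i): it is zero the first time a source
    # position is attended, and grows when the decoder re-attends to it.
    cov_loss = sum(min(a, c) for a, c in zip(attention, coverage))
    # The coverage vector accumulates this step's attention for future steps.
    new_coverage = [c + a for c, a in zip(coverage, attention)]
    return new_coverage, cov_loss
```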
Good job. How about the ROUGE scores?