Abi See
We can't provide the data for legal reasons, but the README gives links to where you can download the [original data](http://cs.nyu.edu/~kcho/DMQA/), which is plain text, and also a link to the...
That code is in [this repository](https://github.com/abisee/pointer-generator). See the function [`example_generator`](https://github.com/abisee/pointer-generator/blob/master/data.py#L108).
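In case it helps, the binary files that `example_generator` iterates over are just length-prefixed records (the payloads are serialized `tf.Example` protos). A minimal sketch of the read/write loop, with hypothetical helper names and plain byte-string payloads standing in for the protos:

```python
import struct

def write_records(path, records):
    # Write each record as an 8-byte length followed by the payload bytes.
    # (Hypothetical helper; in the real data files the payload is a
    # serialized tf.Example proto rather than a raw byte string.)
    with open(path, 'wb') as f:
        for rec in records:
            f.write(struct.pack('q', len(rec)))
            f.write(struct.pack('%ds' % len(rec), rec))

def read_records(path):
    # Mirror of the read loop in example_generator: read an 8-byte length,
    # then read exactly that many payload bytes, until EOF.
    with open(path, 'rb') as f:
        while True:
            len_bytes = f.read(8)
            if not len_bytes:
                break  # end of file
            str_len = struct.unpack('q', len_bytes)[0]
            yield struct.unpack('%ds' % str_len, f.read(str_len))[0]
```

So a round trip like `write_records('tmp.bin', [b'hello', b'world'])` followed by `list(read_records('tmp.bin'))` should give the records back unchanged.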
Have you looked at the output files themselves?
Yes, we used this code and the `single_pass` flag to get the results reported in the paper. The code we've released here is a cleaned-up version of the code we...
I'm not sure what's going on here. Your output seems pretty reasonable but not the kind of thing we'd expect to get such high ROUGE scores. I'd recommend the following:...
@raduk These ROUGE scores look like what we'd expect. Looks like you figured it out!
@makcbe If I understand your question correctly: yes, you should be able to restore a non-coverage model and continue training with `coverage=False`.
@joy369 Congratulations, looks like you've got some fairly reasonable results! Remember that ROUGE scores are not a perfect measure of quality (see the discussion in section 7.1 of the [paper](https://arxiv.org/pdf/1704.04368.pdf))...
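To illustrate why high ROUGE doesn't guarantee a good summary: ROUGE-1 is essentially unigram overlap, so a summary can score well while being disfluent or unfaithful. A toy version of the ROUGE-1 F1 computation (illustration only; official ROUGE adds stemming, stopword options, and bootstrapped confidence intervals):

```python
from collections import Counter

def rouge_1_f1(candidate, reference):
    # Count unigram overlap between candidate and reference summaries.
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

Note that `rouge_1_f1("sat cat the", "the cat sat")` scores a perfect 1.0 even though the candidate is word salad, which is part of why we argue in the paper that ROUGE alone is an imperfect measure.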
@StevenLOL I see this happen sometimes too -- it seems to be a [very common problem](https://www.google.com/search?q=tensorflow+nan+loss+during+training) with TensorFlow training in general.
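One generic mitigation, regardless of framework, is to guard the training loop so a NaN loss stops training and restores the last healthy checkpoint rather than contaminating the saved weights. A framework-agnostic sketch (the `step_fn` and `restore_fn` callables here are hypothetical, not part of this repo's code):

```python
import math

def run_guarded_training(step_fn, restore_fn, max_steps=1000):
    # step_fn(step) runs one training step and returns the loss;
    # restore_fn() reloads the most recent healthy checkpoint.
    # If the loss goes NaN or infinite, restore and bail out early.
    for step in range(max_steps):
        loss = step_fn(step)
        if math.isnan(loss) or math.isinf(loss):
            restore_fn()
            return step  # step at which the bad loss appeared
    return max_steps
```

You'd then decide whether to lower the learning rate, clip gradients harder, or resume from the restored checkpoint and hope the divergence doesn't recur.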
@apoorv001 probably not. This is where the concurrent `eval` job is useful: it [saves](https://github.com/abisee/pointer-generator/blob/master/run_summarization.py#L182) the 3 best checkpoints (according to dev set) at any time. So in theory it should...
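The bookkeeping the eval job does is conceptually simple: track every evaluated checkpoint's dev-set loss and keep only the few with the lowest loss. A minimal sketch of that selection logic (hypothetical helper; the actual code uses a TensorFlow `Saver` to do the saving):

```python
import heapq

def best_checkpoints(evals, k=3):
    # evals: iterable of (dev_loss, checkpoint_name) pairs produced as the
    # eval job scores each new checkpoint on the dev set.
    # Returns the names of the k checkpoints with the lowest dev loss,
    # best first.
    return [name for _, name in heapq.nsmallest(k, evals)]
```

So even if training later diverges, you can go back and decode from whichever of these checkpoints looks best.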