XSum
Topic-Aware Convolutional Neural Networks for Extreme Summarization
Dataset
Thanks for your excellent work. Would you mind providing the XSum dataset directly, just like the CNN/Daily Mail dataset that we are familiar with? I believe it may save time and be more...
Thanks for your excellent work! When I run download-bbc-articles.py, it showed an error; I want to know why. Thanks for your help!
python XSum-Topic-ConvS2S/generate.py D:\NLP\XSum\XSUM-EMNLP18-topic-convs2s\topic-convs2s-emnlp18\data-topic-convs2s --path D:\NLP\XSum\XSUM-EMNLP18-topic-convs2s\topic-convs2s-emnlp18\checkpoints-topic-convs2s\checkpoint_best.pt --batch-size 1 --beam 10 --replace-unk --source-lang document --target-lang summary --doctopics doc-topics --encoder-embed-dim 512 > test-output-topic-convs2s-checkpoint-best.pt
Traceback (most recent call last):
  File "XSum-Topic-ConvS2S/generate.py", line 164, in...
Hello, this is very good work! Did you or anyone else train the model on CNN/DM or Gigaword and get results? Looking forward to your reply!
Hey, I was trying to run Topic-ConvS2S's generate.py on only the test data. I preprocessed only the test data following the guidelines in the XSum-Dataset README with the pretrained LDA model...
Hi, thanks for your work. When I ran "python download-bbc-articles.py [--timestamp_exactness 14]", I encountered this error:
usage: download-bbc-articles.py [-h] [--request_parallelism REQUEST_PARALLELISM] [--context_token_limit CONTEXT_TOKEN_LIMIT] [--timestamp_exactness TIMESTAMP_EXACTNESS]
download-bbc-articles.py: error: unrecognized...
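(The square brackets in the documented usage only mark the flag as optional; argparse rejects them when they are typed literally, which is most likely what triggers the "unrecognized" error above. A minimal sketch of the intended invocation, keeping the value 14 from the original command:

python download-bbc-articles.py --timestamp_exactness 14
)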
Hello, thank you very much for sharing this work and the dataset. Currently, I am working on abstractive summarization, and I wish to evaluate my model on the XSum dataset. While I...
We just sent an email to the address mentioned, `[email protected]`, to request the dataset for our university project, but a `Diagnostic-Code: smtp; 550 5.1.1 ... User unknown` failure mail...
Thank you for sharing your repo! I am running your code on CNN/Daily Mail and experienced a similar issue to the one you encountered at [https://github.com/pytorch/fairseq/issues/118](https://github.com/pytorch/fairseq/issues/118). Could you let me know how...
Hi, thanks for providing the dataset as a download. I downloaded the dataset from the location mentioned in https://github.com/EdinburghNLP/XSum/issues/12#issuecomment-558241165, but it appears that the format of the dataset is different...