XSum
Just a comment (poor results) ...
... a very interesting paper (http://aclweb.org/anthology/D18-1206, with included examples), but I tried the online demo (http://cohort.inf.ed.ac.uk/xsum.html) with rather appalling results.
The textual sources for my tests were the abstract of one of my published papers and the Google infobox for Nova Scotia.
Does the code for the online demo faithfully replicate the system described in the paper?


Thanks for sharing your results. We have also provided different texts to play with in our demo. Note that the demo uses a model trained on the XSum (BBC) dataset to generate an extreme summary (a single sentence). The underlying model does not transfer faithfully to text from other domains.
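For anyone wanting to reproduce this behaviour, here is a minimal sketch of extreme (single-sentence) summarization with a model trained on XSum. This is not the demo's actual code: it assumes the Hugging Face `transformers` library and the publicly released `google/pegasus-xsum` checkpoint, a different XSum-trained model that postdates this thread.

```python
# Illustrative only: NOT the code behind http://cohort.inf.ed.ac.uk/xsum.html.
# Assumes `pip install transformers torch` and the `google/pegasus-xsum`
# checkpoint, another model trained on the XSum (BBC) dataset.
from transformers import pipeline

summarizer = pipeline("summarization", model="google/pegasus-xsum")

document = (
    "Nova Scotia is one of Canada's three Maritime provinces. Its capital "
    "and largest city is Halifax, and its population is roughly one million."
)

# XSum-trained models compress the entire input into a single sentence,
# which is why out-of-domain text (paper abstracts, infoboxes) can come
# out badly: the model has only ever seen BBC news articles.
result = summarizer(document, max_length=40, min_length=5, do_sample=False)
print(result[0]["summary_text"])
```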
Ahh, thank you for the quick reply (appreciated). Would we be able to train on our own datasets, or (even better) use a pretrained language model (e.g., fastText, ELMo, BERT) that would likely include relevant semantic information/embeddings?
Of course you can train these models on your own datasets; all the code is available. It would be interesting to plug pretrained language models (e.g., fastText, ELMo, BERT) into these architectures.
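A hedged sketch of one way such plugging-in could work: extract contextual token embeddings from BERT and feed them to the summarizer's encoder in place of randomly initialized word embeddings. The Hugging Face model name below is real, but wiring these vectors into the XSum architectures is an assumption for illustration, not something the repository documents.

```python
# Sketch: BERT token embeddings as a drop-in encoder input.
# Assumes `pip install transformers torch`; how these vectors are consumed
# by the summarization model is up to you.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
bert.eval()

text = "Extreme summarization compresses a document into one sentence."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = bert(**inputs)

# (batch, seq_len, hidden_size) contextual vectors; these would replace
# the learned-from-scratch word embeddings at the encoder's input layer.
token_embeddings = outputs.last_hidden_state
print(token_embeddings.shape)  # e.g. torch.Size([1, 12, 768])
```

fastText would supply static (non-contextual) word vectors instead, and ELMo character-based contextual ones; the integration point (the encoder's embedding layer) is the same in each case.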
Excellent; thank you! :-)
@victoriastuart do you know how to train the model on my own data, and how to combine fastText or BERT to improve the results? Thank you.
@shashiongithub how can I make this project support the Chinese language? Thanks.
I have not explored these options in my models. I do encourage you to do so.
@shashiongithub thank you