MatchSum
Code for ACL 2020 paper: "Extractive Summarization as Text Matching"
Testing process of MatchSum:

```
!!! Start loading datasets !!!
Finished in 0:00:03.919354
Information of dataset is:
In total 1 datasets:
    test has 11489 instances.
Current model is MatchSum_cnndm_bert.ckpt
/usr/local/lib/python3.7/dist-packages/torch/serialization.py:593: SourceChangeWarning: ...
```
Hi, I'm trying to access the summaries and the reference texts, but the files are .dec and .ref and I don't understand how to open them. Can someone help me...
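The .dec (decoded system summary) and .ref (reference summary) files produced during evaluation are plain text, one document per file, so any text reader works. A minimal sketch for reading them side by side, assuming the outputs sit in one directory with matching filenames (the `result` directory name is an assumption, not confirmed by the repo):

```python
from pathlib import Path

# Hypothetical output directory; adjust to wherever MatchSum wrote its results.
result_dir = Path("result")

for dec_path in sorted(result_dir.glob("*.dec")):
    ref_path = dec_path.with_suffix(".ref")  # matching reference file
    system_summary = dec_path.read_text(encoding="utf-8")
    reference = ref_path.read_text(encoding="utf-8")
    print("SYSTEM:   ", system_summary.strip())
    print("REFERENCE:", reference.strip())
```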
Hi, thank you for your awesome work. Can you give more detailed instructions on personal data preparation? More specifically, how to convert a text file to a jsonl file with the fields...
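A minimal sketch of such a conversion, assuming each input .txt file holds one document with one sentence per line; the field names `"text"` and `"summary"` (lists of sentences) are assumptions that should be checked against the released *.jsonl files:

```python
import json

def text_to_jsonl(doc_paths, summary_paths, out_path):
    """Write one JSON object per document, one per line (jsonl)."""
    with open(out_path, "w", encoding="utf-8") as out:
        for doc_path, sum_path in zip(doc_paths, summary_paths):
            with open(doc_path, encoding="utf-8") as f:
                text = [line.strip() for line in f if line.strip()]
            with open(sum_path, encoding="utf-8") as f:
                summary = [line.strip() for line in f if line.strip()]
            # Field names are assumptions; match them to the released data.
            out.write(json.dumps({"text": text, "summary": summary}) + "\n")
```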
Input fields after batching (if batch size is 2):

- candidate_id: (1) type: torch.Tensor, (2) dtype: torch.int64, (3) shape: torch.Size([2, 20, 90])
- text_id: (1) type: torch.Tensor, (2) dtype: torch.int64, (3) shape: torch.Size([2, 451])
- summary_id: (1) type: torch.Tensor, (2) dtype: torch.int64, (3) shape: torch.Size([2, 45])

There is no target field.
Hi, if I wish to find the most important sentences using the BertExt model prior to generating summaries, what's the best way to do it? For example, can I use...
I see the implementation in `get_candidate.py`: first, select the **5** most important sentences with BertExt; then select any 2 or 3 of them to form a candidate summary. There are...
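This candidate-generation step is easy to reproduce with itertools: choosing any 2 or 3 of the 5 highest-scoring sentences yields C(5,2) + C(5,3) = 10 + 10 = 20 candidates, which matches the 20 in the candidate_id shape above. A minimal sketch (variable names are illustrative, not taken from `get_candidate.py`):

```python
from itertools import combinations

# Indices of the 5 sentences scored highest by BertExt (illustrative values).
top5 = [0, 3, 5, 8, 11]

# Every 2- or 3-sentence subset becomes one candidate summary.
candidates = [list(c) for r in (2, 3) for c in combinations(top5, r)]
assert len(candidates) == 20  # C(5,2) + C(5,3) = 10 + 10
```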
Hi, thanks so much for sharing your code and model! I downloaded the pretrained model you provided, but ran into this error. Do you know how I can solve this...
Hi, in your paper you state that you use the same learning rate schedule as in the paper "Attention Is All You Need", but I cannot find any implementation of...
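For reference, the schedule from "Attention Is All You Need" (often called the Noam schedule) is lr = d_model^(-0.5) · min(step^(-0.5), step · warmup^(-1.5)): linear warmup followed by inverse-square-root decay. A minimal sketch with the constants from the Transformer paper; whether MatchSum uses these exact values is not confirmed here and should be checked in the training code:

```python
def noam_lr(step, d_model=512, warmup=4000):
    """Learning rate schedule from "Attention Is All You Need":
    linear warmup for `warmup` steps, then inverse-sqrt decay."""
    step = max(step, 1)  # avoid division by zero at step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)
```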
Thanks for publishing your code to the public. I wonder how you obtain the "label" field in test_CNNDM_roberta.jsonl. Do you use the greedy selection algorithm mentioned in SummaRuNNer, or use...
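The greedy selection referred to here repeatedly adds the sentence that most improves the ROUGE score against the reference, stopping when no remaining sentence helps. A minimal sketch, assuming a scoring function `rouge(candidate_sentences, reference)` is available (a hypothetical helper, not part of this repo):

```python
def greedy_label_selection(sentences, reference, rouge, max_sents=3):
    """Greedily pick sentence indices that maximize ROUGE against the reference."""
    selected, best_score = [], 0.0
    while len(selected) < max_sents:
        gains = [
            (rouge([sentences[j] for j in sorted(selected + [i])], reference), i)
            for i in range(len(sentences))
            if i not in selected
        ]
        if not gains:
            break
        score, idx = max(gains)
        if score <= best_score:  # stop once no sentence improves ROUGE
            break
        best_score, selected = score, selected + [idx]
    return sorted(selected)
```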
Hi, first of all, thank you for your work and for sharing the code. I was wondering if there are instructions to train on a custom data collection (adapting data...