Atul Kumar

Results: 21 comments by Atul Kumar

You can try this and let me know if you find any bugs: https://github.com/atulkum/pointer_summarizer

Did you turn on the coverage loss?

You can compare the ROUGE scores too. I used a GTX 1070 with 8 GB and it took 3 days to train for 500k iterations. On a 1080 Ti it should be faster.

That's great. One more option would be to train for 700k iterations, save a checkpoint every 50k, and verify which checkpoint gives the best result.

You don't need to upload the model. You can just report the ROUGE score.
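
For reporting, a minimal sketch of computing ROUGE with the `rouge-score` pip package (this package choice and the example strings are assumptions; the repo itself may rely on a different tool such as pyrouge):

```python
# Sketch: score one (reference, decoded) pair with the rouge-score package.
# Assumes `pip install rouge-score`; not necessarily the tool used by the repo.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(
    target="police killed the gunman",             # reference summary
    prediction="the gunman was killed by police",  # model output
)
for name, result in scores.items():
    print(name, round(result.fmeasure, 4))
```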

Thanks for doing this. Did you enable coverage loss for this result?

On my GTX 1070 it took around 3 days. Google Colab might be faster.

You need to change the code here https://github.com/atulkum/pointer_summarizer/blob/master/data_util/data.py#L35 and here https://github.com/atulkum/pointer_summarizer/blob/master/training_ptr_gen/model.py#L45. You might need something like:

```python
word_emb_matrix = get_word_embd(vocab, config)
embd_vector = torch.from_numpy(word_emb_matrix).float()
self.word_embeds = nn.Embedding.from_pretrained(embd_vector, freeze=False)
```

This link...
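
For illustration, here is a rough sketch of what a `get_word_embd` helper could look like; it is not part of the repo, and the `config.glove_path` / `config.emb_dim` fields plus the `vocab.size()` / `vocab.word2id()` calls are assumptions about the surrounding code:

```python
import numpy as np

def get_word_embd(vocab, config):
    # Hypothetical helper: build a (vocab_size, emb_dim) matrix from a GloVe-style
    # text file, falling back to small random values for words without a vector.
    emb = np.random.uniform(-0.1, 0.1, (vocab.size(), config.emb_dim)).astype(np.float32)
    unk_id = vocab.word2id("[UNK]")  # word2id is assumed to map OOV words to the UNK id
    with open(config.glove_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, values = parts[0], parts[1:]
            word_id = vocab.word2id(word)
            if word_id != unk_id and len(values) == config.emb_dim:
                emb[word_id] = np.asarray(values, dtype=np.float32)
    return emb
```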

Thanks for reviewing the code. You are correct; I fixed the code and will update the results after the re-run.

I think you are right. The order of updating the coverage is not correct: the coverage should be updated after the coverage loss has been calculated. I think I might have...
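
For context, a minimal sketch of the intended ordering inside one decoder step (the function, tensor names, and the small usage example are illustrative, not the repo's exact variables):

```python
import torch

def coverage_step(attn_dist, coverage):
    # attn_dist: [batch, src_len] attention distribution at the current decoder step.
    # coverage:  [batch, src_len] sum of attention distributions from previous steps.
    # The coverage loss is computed against the coverage accumulated *so far*...
    step_cov_loss = torch.sum(torch.min(attn_dist, coverage), dim=1)
    # ...and only afterwards is the current attention folded into the coverage.
    new_coverage = coverage + attn_dist
    return step_cov_loss, new_coverage

# Tiny usage example with random tensors.
attn = torch.softmax(torch.rand(2, 5), dim=1)
cov = torch.zeros(2, 5)
loss, cov = coverage_step(attn, cov)
```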