
Some bugs in generation phase

Open · czyssrs opened this issue on Jul 12, 2019 · 3 comments

Hi,

Thanks for your excellent work.

I downloaded the data and processed it as in the instructions (a few of the articles could not be downloaded, so I modified the data processing script to skip them; I assume this is not a big issue). Then I trained a new model and everything looked good. However, when I run the generation script, I get the following error:

The command:

```
CUDA_VISIBLE_DEVICES=1 python XSum-ConvS2S/generate.py ./data-convs2s \
    --path ./checkpoints-convs2s/checkpoint-best.pt \
    --batch-size 1 --beam 10 --replace-unk \
    --source-lang document --target-lang summary > test-output-convs2s-checkpoint-best.pt
```

Output:

```
  0%|          | 0/11334 [00:00<?, ?it/s]
/home/rasmlnlp/zhiyu/anaconda3/envs/tf1.12/lib/python3.5/site-packages/torch/autograd/function.py:41: UserWarning: mark_shared_storage is deprecated. Tensors with shared storages are automatically tracked. Note that calls to set_() are not tracked
  'mark_shared_storage is deprecated. '
Traceback (most recent call last):
  File "XSum-ConvS2S/generate.py", line 161, in <module>
    main(args)
  File "XSum-ConvS2S/generate.py", line 96, in main
    for sample_id, src_tokens, target_tokens, hypos in translations:
  File "/home/rasmlnlp/zhiyu/XSum/XSum-ConvS2S/fairseq/sequence_generator.py", line 77, in generate_batched_itr
    prefix_tokens=s['target'][:, :prefix_size] if prefix_size > 0 else None,
  File "/home/rasmlnlp/zhiyu/XSum/XSum-ConvS2S/fairseq/sequence_generator.py", line 90, in generate
    return self._generate(src_tokens, src_lengths, beam_size, maxlen, prefix_tokens)
  File "/home/rasmlnlp/zhiyu/XSum/XSum-ConvS2S/fairseq/sequence_generator.py", line 250, in _generate
    tokens[:, :step+1], encoder_outs, incremental_states)
  File "/home/rasmlnlp/zhiyu/XSum/XSum-ConvS2S/fairseq/sequence_generator.py", line 413, in _decode
    decoder_out, attn = model.decoder(tokens, encoder_out, incremental_states[model])
  File "/home/rasmlnlp/zhiyu/anaconda3/envs/tf1.12/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/rasmlnlp/zhiyu/XSum/XSum-ConvS2S/fairseq/models/fconv.py", line 266, in forward
    x, attn_scores = attention(x, target_embedding, (encoder_a, encoder_b))
  File "/home/rasmlnlp/zhiyu/anaconda3/envs/tf1.12/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/rasmlnlp/zhiyu/XSum/XSum-ConvS2S/fairseq/models/fconv.py", line 160, in forward
    x = self.bmm(x, encoder_out[0])
  File "/home/rasmlnlp/zhiyu/anaconda3/envs/tf1.12/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/rasmlnlp/zhiyu/XSum/XSum-ConvS2S/fairseq/modules/beamable_mm.py", line 34, in forward
    input1 = input1[:, 0, :].unfold(0, beam, beam).transpose(2, 1)
RuntimeError: invalid argument 3: out of range at /opt/conda/conda-bld/pytorch_1535490206202/work/aten/src/THC/generic/THCTensor.cpp:444
```
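In case it helps with debugging: the failing call is `unfold(0, beam, beam)`, which requires the tensor's first dimension (the batch entries expanded by the beam width) to hold at least `beam` rows. The error reproduces in isolation with shapes like the following (illustrative only, not the actual tensors inside fairseq):

```python
import torch

beam = 10
# BeamableMM expects dim 0 of this tensor to be (batch_size * beam_size);
# here it is only 1, as if the beam expansion never happened.
x = torch.randn(1, 1, 512)

# unfold(dim, size, step) needs size <= x.size(dim); 10 > 1, so this raises
# the same out-of-range RuntimeError (exact message varies by PyTorch version).
x[:, 0, :].unfold(0, beam, beam)
```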

Is there a bug here? Or is there another way to reproduce the results in the paper? (Training only reports the loss/perplexity on the validation set, not ROUGE scores.)
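(To be concrete about what I mean by ROUGE results: scores like the ones below, computed over the generated test outputs. The `rouge-score` package here is just an example scorer I picked for illustration, not necessarily the toolkit used for the paper's numbers.)

```python
# pip install rouge-score  -- an example scorer, chosen for illustration only
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "a gold summary from the XSum test set"
hypothesis = "a summary produced by the ConvS2S model"

# score(target, prediction) returns precision/recall/F1 per ROUGE variant
scores = scorer.score(reference, hypothesis)
for name, s in scores.items():
    print("%s: F1 = %.4f" % (name, s.fmeasure))
```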

Thank you.

czyssrs · Jul 12 '19

Sorry for the delay. Could you please first use the original data and verify if you still get the error? http://kinloch.inf.ed.ac.uk/public/XSUM-EMNLP18-Summary-Data-Original.tar.gz

shashiongithub · Aug 21 '19

I get the same error and I'm using the original data.

somaia02 · Nov 18 '19

Hi,

Could you tell me how you modified the data processing part to skip the missing files?
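(My guess is that it comes down to wrapping each per-article download in a try/except, as in the sketch below, but I would like to confirm what you actually did. `fetch_article` and `download_all` are placeholder names for illustration, not the actual functions in the XSum data scripts.)

```python
import urllib.request
from urllib.error import HTTPError, URLError

def fetch_article(url):
    """Placeholder for the actual per-article download step."""
    with urllib.request.urlopen(url, timeout=30) as f:
        return f.read().decode("utf-8")

def download_all(urls):
    """Fetch every URL, skipping the ones that fail instead of crashing."""
    articles, skipped = [], []
    for url in urls:
        try:
            articles.append(fetch_article(url))
        except (HTTPError, URLError) as err:
            skipped.append((url, str(err)))  # record the failure and move on
    print("downloaded %d articles, skipped %d" % (len(articles), len(skipped)))
    return articles, skipped
```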

marythomaa98 · Dec 02 '19