
RuntimeError: masked_select(): self and result must have the same scalar type

Open nomadlx opened this issue 4 years ago • 3 comments

This error occurs after the model checkpoint is saved. My training parameters are configured as follows:

python train.py ${data_dir} \
      --clip-norm 0.1 \
      --dropout 0.1 \
      --max-tokens ${max_tokens} \
      --seed ${seed} \
      --num-workers 8 \
      --source-lang ${src_lng} \
      --target-lang ${tgt_lng} \
      --optimizer adafactor \
      --criterion label_smoothed_cross_entropy \
      --label-smoothing 0.1 \
      --weight-decay 0.0 \
      --lr 0.0007 \
      --lr-scheduler inverse_sqrt \
      --warmup-init-lr 1e-07 \
      --warmup-updates 4000 \
      --task translation \
      --arch oracle_transformer_vaswani_wmt_en_de_big \
      --use-sentence-level-oracles --decay-k 5800 --use-bleu-gumbel-noise --gumbel-noise 0.5 --oracle-search-beam-size 4 \
      --fp16 \
      --save-dir ${model_dir} \
      --log-interval 100 \
      --save-interval-updates ${save_interval} \
      --me ${max_epoch} \
      --memory-efficient-fp16

nomadlx avatar Nov 06 '20 09:11 nomadlx

The error traceback:

File "/home/exp_or_nmt/code/OR-NMT/OR-Transformer/fairseq/trainer.py", line 380, in train_step
    raise e
  File "/home/exp_or_nmt/code/OR-NMT/OR-Transformer/fairseq/trainer.py", line 358, in train_step
    ignore_grad=is_dummy_batch,
  File "/home/exp_or_nmt/code/OR-NMT/OR-Transformer/fairseq/tasks/fairseq_task.py", line 337, in train_step
    loss, sample_size, logging_output = criterion(model, sample)
  File "/home/exp_or_nmt/code/OR-NMT/venv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/exp_or_nmt/code/OR-NMT/OR-Transformer/fairseq/criterions/label_smoothed_cross_entropy.py", line 56, in forward
    net_output = model(**sample['net_input'])
  File "/home/exp_or_nmt/code/OR-NMT/venv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/exp_or_nmt/code/OR-NMT/venv/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 619, in forward
    output = self.module(*inputs[0], **kwargs[0])
  File "/home/exp_or_nmt/code/OR-NMT/venv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/exp_or_nmt/code/OR-NMT/OR-Transformer/fairseq/models/oracle_transformer.py", line 346, in forward
    prev_output_tokens, src_tokens, src_lengths, target)
  File "/home/exp_or_nmt/code/OR-NMT/venv/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "/home/exp_or_nmt/code/OR-NMT/OR-Transformer/fairseq/models/oracle_transformer.py", line 303, in get_sentence_oracle_tokens
    out = self.generator.generate([self], sample, target, noise=noise)
  File "/home/exp_or_nmt/code/OR-NMT/venv/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "/home/exp_or_nmt/code/OR-NMT/OR-Transformer/fairseq/sequence_generator.py", line 92, in generate
    return self._generate(model, sample, target=target, noise=noise, **kwargs)
  File "/home/exp_or_nmt/code/OR-NMT/venv/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "/home/exp_or_nmt/code/OR-NMT/OR-Transformer/fairseq/sequence_generator.py", line 393, in _generate
    out=eos_bbsz_idx,
RuntimeError: masked_select(): self and result must have the same scalar type
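The failure mode can be reproduced in isolation: masked_select raises this error when given an out= buffer whose dtype differs from the input tensor. A standalone sketch (toy tensors, not the repo's actual data):

```python
import torch

scores = torch.tensor([0.1, -1.0, 0.7])
mask = scores > 0

# An out= buffer of a different dtype than the input triggers the RuntimeError.
bad_out = torch.empty(0, dtype=torch.long)
try:
    torch.masked_select(scores, mask, out=bad_out)
except RuntimeError as err:
    print(err)  # dtype mismatch error (exact wording varies by PyTorch version)

# With matching dtypes the call succeeds.
good_out = torch.empty(0, dtype=scores.dtype)
torch.masked_select(scores, mask, out=good_out)
```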

nomadlx avatar Nov 06 '20 09:11 nomadlx

The bug seems to be caused here: https://github.com/ictnlp/OR-NMT/blob/239e05e48c2ed4748de01e8909f919e836d821db/OR-Transformer/fairseq/search.py#L78 The correct line is:

beams_buf = indices_buf // vocab_size
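For context: in recent PyTorch versions, true division (`/`) promotes an integer index tensor to float, and the float result later fails the masked_select into a long out= buffer; floor division (`//`) preserves the integer dtype. A minimal illustration with made-up values, not the repo's code:

```python
import torch

vocab_size = 10
indices_buf = torch.tensor([5, 12, 27])  # flat (beam * vocab + token) indices

float_beams = indices_buf / vocab_size   # true division -> float tensor
int_beams = indices_buf // vocab_size    # floor division -> stays integer

assert float_beams.dtype == torch.float32
assert int_beams.dtype == torch.int64
```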

This bug was fixed, but then another one popped up:

File "/home/exp_or_nmt/code/OR-NMT/OR-Transformer/fairseq/models/oracle_transformer.py", line 304, in get_sentence_oracle_tokens
    sentence_oracle_inputs = torch.ones_like(target)
TypeError: ones_like(): argument 'input' (position 1) must be Tensor, not NoneType
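This second failure suggests `target` is None when the oracle path is entered (for example, a batch that carries no target). A defensive guard along these lines would avoid the crash; this is a hypothetical sketch, not the repo's actual fix:

```python
import torch

def make_oracle_inputs(target):
    # torch.ones_like requires a real Tensor; a None target (e.g. a batch
    # without references) must be handled before calling it.
    if target is None:
        return None
    return torch.ones_like(target)
```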

Does it work correctly when you run it?

nomadlx avatar Nov 06 '20 10:11 nomadlx


Thank you so much! This solution works, and the second bug did not appear in my BART inference procedure.

zhuochunli avatar Feb 07 '23 21:02 zhuochunli