
RuntimeError on README example

Open · stefan-jansen opened this issue · 3 comments

Thank you for sharing the model and the blog post — it looks very interesting. Unfortunately, when running the README example in a Python 3.7 virtual environment with the requirements installed, I get the following error:

echo "tisimptant too spll chck ths dcment." \
>     | python src/tokenize.py \
>     | fairseq-interactive model7m/ \
>     --path model7m/checkpoint_best.pt \
>     --source-lang fr --target-lang en --beam 10 \
>     | python src/format_fairseq_output.py
Traceback (most recent call last):
  File "/home/stefan/.pyenv/versions/xfspell/bin/fairseq-interactive", line 11, in <module>
    load_entry_point('fairseq==0.9.0', 'console_scripts', 'fairseq-interactive')()
  File "/home/stefan/.pyenv/versions/3.7.8/envs/xfspell/lib/python3.7/site-packages/fairseq_cli/interactive.py", line 190, in cli_main
    main(args)
  File "/home/stefan/.pyenv/versions/3.7.8/envs/xfspell/lib/python3.7/site-packages/fairseq_cli/interactive.py", line 149, in main
    translations = task.inference_step(generator, models, sample)
  File "/home/stefan/.pyenv/versions/3.7.8/envs/xfspell/lib/python3.7/site-packages/fairseq/tasks/fairseq_task.py", line 265, in inference_step
    return generator.generate(models, sample, prefix_tokens=prefix_tokens)
  File "/home/stefan/.pyenv/versions/3.7.8/envs/xfspell/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
    return func(*args, **kwargs)
  File "/home/stefan/.pyenv/versions/3.7.8/envs/xfspell/lib/python3.7/site-packages/fairseq/sequence_generator.py", line 113, in generate
    return self._generate(model, sample, **kwargs)
  File "/home/stefan/.pyenv/versions/3.7.8/envs/xfspell/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
    return func(*args, **kwargs)
  File "/home/stefan/.pyenv/versions/3.7.8/envs/xfspell/lib/python3.7/site-packages/fairseq/sequence_generator.py", line 379, in _generate
    scores.view(bsz, beam_size, -1)[:, :, :step],
  File "/home/stefan/.pyenv/versions/3.7.8/envs/xfspell/lib/python3.7/site-packages/fairseq/search.py", line 81, in step
    torch.div(self.indices_buf, vocab_size, out=self.beams_buf)
RuntimeError: Integer division of tensors using div or / is no longer supported, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead.

Here's the result of pip freeze:

aspell-python-py3==1.15
cffi==1.14.2
Cython==0.29.21
editdistance==0.5.3
fairseq==0.9.0
future==0.18.2
numpy==1.19.1
portalocker==2.0.0
pycparser==2.20
regex==2020.7.14
sacrebleu==1.4.13
torch==1.6.0
tqdm==4.48.2

Any guidance appreciated!

— stefan-jansen, Aug 28 '20

According to the corresponding fairseq issue (https://github.com/pytorch/fairseq/issues/2460), you can fix this by editing `search.py`, changing `torch.div(self.indices_buf, vocab_size, out=self.beams_buf)` to `torch.floor_divide(self.indices_buf, vocab_size, out=self.beams_buf)`.
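For illustration, here is a minimal sketch of why `floor_divide` is the right replacement, using made-up toy values rather than fairseq's actual buffers:

```python
import torch

# Toy stand-ins for fairseq's indices_buf and vocab_size.
indices = torch.tensor([7, 12, 25], dtype=torch.long)
vocab_size = 5

# floor_divide performs integer floor division, which is the beam-index
# arithmetic the original torch.div call relied on before the deprecation.
beams = torch.floor_divide(indices, vocab_size)
print(beams.tolist())  # [1, 2, 5]
```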

— aacs0130, Sep 28 '20

I was able to run the code by upgrading fairseq to the latest version; 0.9.0 did not work for me.

— Omarnabk, Dec 14 '20

Similar issue for me:

$ echo "The book Tom and Jerry put on the yellow desk yesterday war about NLP." | python src/tokenize.py | fairseq-interactive model7m/ --path model7m/checkpoint_best.pt --source-lang fr --target-lang en | python src/format_fairseq_output.py
Traceback (most recent call last):
  File "/home/dsorge/anaconda3/envs/spellcheck/bin/fairseq-interactive", line 8, in <module>
    sys.exit(cli_main())
  File "/home/dsorge/anaconda3/envs/spellcheck/lib/python3.9/site-packages/fairseq_cli/interactive.py", line 190, in cli_main
    main(args)
  File "/home/dsorge/anaconda3/envs/spellcheck/lib/python3.9/site-packages/fairseq_cli/interactive.py", line 149, in main
    translations = task.inference_step(generator, models, sample)
  File "/home/dsorge/anaconda3/envs/spellcheck/lib/python3.9/site-packages/fairseq/tasks/fairseq_task.py", line 265, in inference_step
    return generator.generate(models, sample, prefix_tokens=prefix_tokens)
  File "/home/dsorge/anaconda3/envs/spellcheck/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/dsorge/anaconda3/envs/spellcheck/lib/python3.9/site-packages/fairseq/sequence_generator.py", line 113, in generate
    return self._generate(model, sample, **kwargs)
  File "/home/dsorge/anaconda3/envs/spellcheck/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/dsorge/anaconda3/envs/spellcheck/lib/python3.9/site-packages/fairseq/sequence_generator.py", line 376, in _generate
    cand_scores, cand_indices, cand_beams = self.search.step(
  File "/home/dsorge/anaconda3/envs/spellcheck/lib/python3.9/site-packages/fairseq/search.py", line 81, in step
    torch.div(self.indices_buf, vocab_size, out=self.beams_buf)
RuntimeError: result type Float can't be cast to the desired output type Long
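This newer error is the same root cause surfacing differently: on recent PyTorch, `torch.div` performs true division, so it produces a Float result that cannot be written into the Long `out=` buffer. A small sketch with made-up values (not fairseq's actual tensors) showing that `floor_divide` stays in integer arithmetic and writes into the buffer cleanly:

```python
import torch

# Toy stand-ins for fairseq's self.indices_buf / self.beams_buf.
indices_buf = torch.tensor([7, 12, 25], dtype=torch.long)
beams_buf = torch.empty(3, dtype=torch.long)
vocab_size = 5

# torch.div(indices_buf, vocab_size, out=beams_buf) would try to cast a
# Float result into this Long buffer and raise the RuntimeError above.
# floor_divide keeps integer semantics, so the in-place write succeeds.
torch.floor_divide(indices_buf, vocab_size, out=beams_buf)
print(beams_buf.tolist())  # [1, 2, 5]
```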

Edit: The solution by @aacs0130 worked for me! Thanks for your help, Cecilia!

— DavidSorge, Mar 05 '21