
SyntaxError in wav2vec model

Open boomb0om opened this issue 1 year ago • 1 comment

🐛 Bug

Hi, I get a SyntaxError: invalid syntax from the file fairseq/models/wav2vec/wav2vec2_classification.py. I think it is caused by the = sign inside an f-string: self-documenting f-strings of the form f"{expr=}" were only added in Python 3.8, and I am running Python 3.7.

https://github.com/facebookresearch/fairseq/blob/af12c9c6407bbcf2bca0b2f1923cf78f3db8857c/fairseq/models/wav2vec/wav2vec2_classification.py#L209
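The linked line apparently uses a self-documenting f-string (the f"{expr=}" form), which is a Python 3.8+ feature; a Python 3.7 interpreter cannot even compile a module containing one, so the failure happens at import time. A minimal sketch reproducing the construct (my own illustration, not code from fairseq):

```python
# Self-documenting f-strings (f"{expr=}") were added in Python 3.8.
# On Python 3.7, compiling source that contains one raises SyntaxError
# before any code runs -- which is exactly how a plain import can fail.
src = 'latent_embed_dim = 256\nprint(f"{latent_embed_dim=}")'

try:
    compile(src, "<fstring-demo>", "exec")
    print("compiled OK: this interpreter is Python 3.8 or newer")
except SyntaxError:
    print("SyntaxError: this interpreter predates Python 3.8")
```

Under Python 3.7 this hits the SyntaxError branch, matching the traceback in this report.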

To Reproduce

Steps to reproduce the behavior:

  1. Run the code below in jupyter notebook:
import torch
wmt_translator = torch.hub.load(
    'pytorch/fairseq', 
    'transformer.wmt19.ru-en', 
    checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt',
    tokenizer='moses', 
    bpe='fastbpe'
)
  2. See error:
Traceback (most recent call last):
  File "import_and_download.py", line 10, in <module>
    bpe='fastbpe'
  File "/home/user/conda/lib/python3.7/site-packages/torch/hub.py", line 404, in load
    model = _load_local(repo_or_dir, model, *args, **kwargs)
  File "/home/user/conda/lib/python3.7/site-packages/torch/hub.py", line 430, in _load_local
    hub_module = _import_module(MODULE_HUBCONF, hubconf_path)
  File "/home/user/conda/lib/python3.7/site-packages/torch/hub.py", line 76, in _import_module
    spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/jovyan/.cache/torch/hub/pytorch_fairseq_main/hubconf.py", line 39, in <module>
    from fairseq.hub_utils import (  # noqa; noqa
  File "/home/jovyan/.cache/torch/hub/pytorch_fairseq_main/fairseq/__init__.py", line 33, in <module>
    import fairseq.criterions  # noqa
  File "/home/jovyan/.cache/torch/hub/pytorch_fairseq_main/fairseq/criterions/__init__.py", line 36, in <module>
    importlib.import_module("fairseq.criterions." + file_name)
  File "/home/user/conda/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/home/jovyan/.cache/torch/hub/pytorch_fairseq_main/fairseq/criterions/ctc.py", line 21, in <module>
    from fairseq.tasks import FairseqTask
  File "/home/jovyan/.cache/torch/hub/pytorch_fairseq_main/fairseq/tasks/__init__.py", line 138, in <module>
    import_tasks(tasks_dir, "fairseq.tasks")
  File "/home/jovyan/.cache/torch/hub/pytorch_fairseq_main/fairseq/tasks/__init__.py", line 119, in import_tasks
    importlib.import_module(namespace + "." + task_name)
  File "/home/user/conda/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/home/jovyan/.cache/torch/hub/pytorch_fairseq_main/fairseq/tasks/multilingual_translation.py", line 21, in <module>
    from fairseq.models import FairseqMultiModel
  File "/home/jovyan/.cache/torch/hub/pytorch_fairseq_main/fairseq/models/__init__.py", line 236, in <module>
    import_models(models_dir, "fairseq.models")
  File "/home/jovyan/.cache/torch/hub/pytorch_fairseq_main/fairseq/models/__init__.py", line 218, in import_models
    importlib.import_module(namespace + "." + model_name)
  File "/home/user/conda/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/home/jovyan/.cache/torch/hub/pytorch_fairseq_main/fairseq/models/hubert/__init__.py", line 6, in <module>
    from .hubert import *  # noqa
  File "/home/jovyan/.cache/torch/hub/pytorch_fairseq_main/fairseq/models/hubert/hubert.py", line 20, in <module>
    from fairseq.models.wav2vec.wav2vec2 import (
  File "/home/jovyan/.cache/torch/hub/pytorch_fairseq_main/fairseq/models/wav2vec/__init__.py", line 10, in <module>
    from .wav2vec2_classification import * # noqa
  File "<fstring>", line 1
    (self.latent_embed_dim=)
                          ^
SyntaxError: invalid syntax

Code sample

import torch
wmt_translator = torch.hub.load(
    'pytorch/fairseq', 
    'transformer.wmt19.ru-en', 
    checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt',
    tokenizer='moses', 
    bpe='fastbpe'
)

Expected behavior

Successful import and model initialization.

Environment

  • fairseq Version (e.g., 1.0 or main): main
  • PyTorch Version (e.g., 1.0): 1.11.0
  • OS (e.g., Linux): Ubuntu
  • How you installed fairseq (pip, source): pip
  • Build command you used (if compiling from source):
  • Python version: 3.7
  • CUDA/cuDNN version: 11.5
  • GPU models and configuration: Nvidia A100 80GB
  • Any other relevant information:

boomb0om avatar May 23 '23 17:05 boomb0om