
Runtime Error

merethebest opened this issue on May 31, 2023 • 0 comments

❓ Questions and Help

I set input_feat_per_channel to 80, but training fails with the error below. What could be causing this?

Before asking:

  1. search the issues.
  2. search the docs.

2023-05-31 21:00:38 | INFO | fairseq_cli.train | Start iterating over samples
Traceback (most recent call last):
  File "/content/fairseq/train.py", line 14, in <module>
    cli_main()
  File "/content/fairseq/fairseq_cli/train.py", line 574, in cli_main
    distributed_utils.call_main(cfg, main)
  File "/content/fairseq/fairseq/distributed/utils.py", line 404, in call_main
    main(cfg, **kwargs)
  File "/content/fairseq/fairseq_cli/train.py", line 205, in main
    valid_losses, should_stop = train(cfg, trainer, task, epoch_itr)
  File "/usr/lib/python3.10/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/content/fairseq/fairseq_cli/train.py", line 331, in train
    log_output = trainer.train_step(samples)
  File "/usr/lib/python3.10/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/content/fairseq/fairseq/trainer.py", line 868, in train_step
    raise e
  File "/content/fairseq/fairseq/trainer.py", line 843, in train_step
    loss, sample_size_i, logging_output = self.task.train_step(
  File "/content/fairseq/fairseq/tasks/speech_to_speech.py", line 504, in train_step
    loss, sample_size, logging_output = super().train_step(
  File "/content/fairseq/fairseq/tasks/fairseq_task.py", line 532, in train_step
    loss, sample_size, logging_output = criterion(model, sample)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/fairseq/fairseq/criterions/speech_to_speech_criterion.py", line 362, in forward
    feat_out, eos_out, extra = model(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/fairseq/fairseq/models/speech_to_speech/s2s_transformer.py", line 561, in forward
    encoder_out = self.encoder(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/fairseq/fairseq/models/speech_to_speech/s2s_transformer.py", line 46, in forward
    out = super().forward(src_tokens, src_lengths, return_all_hiddens)
  File "/content/fairseq/fairseq/models/speech_to_text/s2t_transformer.py", line 382, in forward
    x = self._forward(
  File "/content/fairseq/fairseq/models/speech_to_text/s2t_transformer.py", line 346, in _forward
    x, input_lengths = self.subsample(src_tokens, src_lengths)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/fairseq/fairseq/models/speech_to_text/modules/convolution.py", line 55, in forward
    x = conv(x)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py", line 313, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py", line 309, in _conv_forward
    return F.conv1d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [1024, 80, 5], expected input[11, 240, 2630] to have 80 channels, but got 240 channels instead
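The failing call can be reproduced in isolation. Below is a minimal sketch using only the shapes visible in the traceback; the variable names are mine, and the kernel size is read off the reported weight shape [1024, 80, 5]:

    import torch
    import torch.nn as nn

    # The subsampler's first convolution was built for 80 input channels,
    # matching the reported weight of size [1024, 80, 5]
    # (out_channels=1024, in_channels=80, kernel_size=5).
    conv = nn.Conv1d(in_channels=80, out_channels=1024, kernel_size=5)

    # The batch from the traceback: 11 utterances, 240 feature dims, 2630 frames.
    # Conv1d treats dim 1 as the channel dimension, so it sees 240 channels.
    features = torch.randn(11, 240, 2630)

    # Raises: RuntimeError: Given groups=1, weight of size [1024, 80, 5],
    # expected input[11, 240, 2630] to have 80 channels, but got 240 channels instead
    conv(features)

Since 240 is exactly 3 × 80, one plausible (unconfirmed) explanation is that the extracted features are 240-dimensional (for example, 80 filterbanks with delta and delta-delta appended, or stacked frames) while the model was configured for plain 80-dimensional frames; comparing the feature dimension in the data config against the actual feature files would narrow this down.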

What is your question?

Code

What have you tried?

What's your environment?

  • fairseq Version (e.g., 1.0 or main):
  • PyTorch Version (e.g., 1.0)
  • OS (e.g., Linux):
  • How you installed fairseq (pip, source):
  • Build command you used (if compiling from source):
  • Python version:
  • CUDA/cuDNN version:
  • GPU models and configuration:
  • Any other relevant information:
