RuntimeError: Given groups=1, weight of size [1024, 80, 5], expected input[11, 240, 2630] to have 80 channels, but got 240 channels instead
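For context, this error comes from a `torch.nn.Conv1d` whose weight shape `[1024, 80, 5]` means `in_channels=80` (presumably the 80-band mel filterbank the model's convolutional subsampler expects), while the input batch `[11, 240, 2630]` carries 240 channels. A minimal sketch that reproduces the same mismatch (the layer and tensor sizes are taken from the error message; everything else is illustrative):

```python
import torch
import torch.nn as nn

# Conv1d with weight [out_channels=1024, in_channels=80, kernel_size=5],
# matching the weight shape reported in the traceback.
conv = nn.Conv1d(in_channels=80, out_channels=1024, kernel_size=5)

# Input shaped like the failing batch: (batch=11, channels=240, time=2630).
# Note 240 = 3 * 80, so the features look three times wider than expected.
x = torch.randn(11, 240, 2630)

conv(x)  # raises the same RuntimeError: expected 80 channels, got 240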
Code

```bash
fairseq-train $DATA_ROOT \
  --config-yaml config.yaml --multitask-config-yaml config_multitask.yaml \
  --task speech_to_speech --n-frames-per-step 5 \
  --criterion speech_to_spectrogram \
  --arch s2spect_transformer_fisher --decoder-normalize-before \
  --dropout 0.1 --attention-dropout 0.1 --relu-dropout 0.1 \
  --train-subset train --valid-subset dev \
  --save-dir ${MODEL_DIR} \
  --eval-inference --best-checkpoint-metric mcd_loss \
  --lr 0.0005 --lr-scheduler inverse_sqrt --warmup-init-lr 1e-7 --warmup-updates 10000 \
  --optimizer adam --adam-betas "(0.9,0.98)" --clip-norm 10.0 --weight-decay 1e-6 \
  --max-update 400000 --max-tokens 80000 --max-tokens-valid 30000 --required-batch-size-multiple 1 \
  --max-target-positions 3000 --update-freq 16 \
  --seed 1 --fp16 --num-workers 8
```
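Since 240 is exactly 3 × 80, one likely culprit is a mismatch between the features on disk and the `input_feat_per_channel` (80) that `config.yaml` declares, e.g. features extracted with 240 mel bins or three stacked 80-dim frames. A quick sanity check, assuming the filterbank features were extracted to `.npy` files (the path below is illustrative, not the actual layout):

```python
import numpy as np

# Illustrative path: point this at one of your extracted filterbank files.
feat = np.load("fbank80/sample_0.npy")

# With input_feat_per_channel: 80 in config.yaml, each utterance should be
# (time, 80); a trailing dimension of 240 would reproduce the channel
# mismatch above.
print(feat.shape)
```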
What have you tried?
What's your environment?
- fairseq Version (e.g., 1.0 or main):
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed fairseq (`pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information: