Shinji Watanabe
@D-Keqi, can you answer this for me? About BatchBeamSearchOnline, is this comment true? I think it only applies to BatchBeamSearchOnlineSim.
@eml914, can you answer the rest of the questions for us?
In many cases, the `lang` parameter is simply used as a model attribute, not as a model parameter. Some recipes use the language ID as a prefix...
We can do it with `sox`, and we can also write the `sox` command directly in `wav.scp`. What do you think?
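For example, here is a minimal sketch of a piped entry (the utterance ID, file path, and 16 kHz target rate are just placeholders for illustration); any `wav.scp` line ending with `|` is run as a command and the audio is read from its stdout:

```sh
# Append a piped entry to wav.scp (utterance id, path, and rate are illustrative):
# the trailing "|" tells the toolkit to run sox and read the wav data from its stdout.
echo "utt001 sox /path/to/utt001.flac -t wav - rate 16000 |" >> data/train/wav.scp
```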
Any progress?
What kind of cluster environment are you using? You may need to change https://github.com/hitachi-speech/EEND/blob/master/egs/mini_librispeech/v1/cmd.sh according to your environment. Check https://kaldi-asr.org/doc/queue.html for details. @yubouf, I strongly recommend adding more documentation about...
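As a starting point, a minimal `cmd.sh` for a single machine without a grid engine might look like the sketch below (the exact `*_cmd` variable names should match whatever the linked `cmd.sh` already defines):

```sh
# Sketch of cmd.sh for a single machine (no SGE/Slurm); keep the same *_cmd
# variable names that the recipe's cmd.sh already defines.
export train_cmd=run.pl
export decode_cmd=run.pl
# On an SGE cluster, something like the following is typical instead:
# export train_cmd="queue.pl --mem 4G"
```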
Oh, I see. Can you set `CUDA_VISIBLE_DEVICES` explicitly then?
`CUDA_VISIBLE_DEVICES=0`?
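For example (the `./run.sh` entry point is just a placeholder for however you launch training):

```sh
# Pin this run to GPU 0 only (script name is a placeholder for your actual entry point):
CUDA_VISIBLE_DEVICES=0 ./run.sh
# or export it for the whole shell session:
export CUDA_VISIBLE_DEVICES=0
```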
Did you tune the learning rate, etc.? Also, I suggest you (virtually) increase the batch size through `accum_grad`: https://github.com/espnet/espnet/blob/master/egs2/librispeech/asr1/conf/tuning/train_lm_transformer2.yaml#L17
> I turned off the `accum_grad`

`accum_grad` is there to increase the effective batch size, so that your trial behaves like one run on a GPU with more memory. You said that your batch...
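To make that concrete, here is a rough sketch (the numbers are illustrative, not taken from the linked config): the optimizer effectively sees `batch_size x accum_grad` samples per update.

```sh
# Illustrative numbers only: gradient accumulation makes each optimizer update use
# batch_size x accum_grad samples, mimicking a larger-memory GPU.
batch_size=16
accum_grad=8
echo "effective batch size per update: $((batch_size * accum_grad))"   # prints 128
```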