Mitchell DeHaven

8 comments of Mitchell DeHaven

@busishengui Did you ever resolve this issue? I'm running into similar issues on a different dataset.

@Giovani-Merlin Is that repo still active? The link is now dead.

@bzp83 The `--patience` flag essentially sets how many epochs can elapse without an improvement in the best validation loss before training terminates. I was using a patience of...
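
For context, the early-stopping logic behind a patience flag boils down to something like this minimal sketch (the function and its arguments below are illustrative placeholders, not the actual training script's API):

```python
def train_with_patience(train_one_epoch, evaluate, patience=5, max_epochs=100):
    """Stop once the best validation loss hasn't improved for `patience` epochs."""
    best_val_loss = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_one_epoch()
        val_loss = evaluate()
        if val_loss < best_val_loss:
            # New best validation loss: reset the counter.
            best_val_loss = val_loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            break  # Patience exhausted: terminate training.
    return best_val_loss
```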

STFT is ONNX-exportable; you just need `return_complex=False` in the `torch.stft` call (ONNX supports STFT, but not with complex values).
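
A minimal export sketch, assuming ONNX opset 17 (which added an STFT op); the module name, `n_fft`/`hop_length` values, and input shape are my own placeholders:

```python
import torch

class STFTModule(torch.nn.Module):
    def __init__(self, n_fft=512, hop_length=128):
        super().__init__()
        self.n_fft = n_fft
        self.hop_length = hop_length
        self.register_buffer("window", torch.hann_window(n_fft))

    def forward(self, x):
        # return_complex=False returns a real tensor with a trailing
        # dimension of size 2 (real/imag parts) instead of a complex
        # tensor, which the ONNX exporter can handle.
        return torch.stft(
            x,
            n_fft=self.n_fft,
            hop_length=self.hop_length,
            window=self.window,
            return_complex=False,
        )

model = STFTModule().eval()
dummy = torch.rand(1, 16000)  # (batch, samples)
torch.onnx.export(model, dummy, "stft.onnx", opset_version=17)
```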

> Using the following repro:
>
> ```python
> from speechbrain.pretrained import EncoderDecoderASR
> import torch
>
> asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-wav2vec2-transformer-aishell", savedir="pretrained_models/asr-transformer-aishell")
> asr_model.eval()
> wavs = torch.rand(1, 34492)
> wav_lens = ...
> ```

Has anyone gotten this code working who could share the steps required to get it training?

> `distil-small.en` is released here: https://huggingface.co/distil-whisper/distil-small.en
>
> It's quite hard to compress further than this without losing WER performance: https://huggingface.co/distil-whisper/distil-small.en#why-is-distil-smallen-slower-than-distil-large-v2

Is there any way we can access the small...

> > Is there any way we can access the small 2-layer decoder variant?
>
> Yes, _c.f._ https://huggingface.co/distil-whisper/distil-small.en

@sanchit-gandhi From https://huggingface.co/distil-whisper/distil-small.en:

> While distil-medium.en and distil-large-v2 use two decoder...