Piotr Żelasko
Sorry, had to de-prioritize it to take care of other stuff. I will eventually get back to it.
I don't think I have anything to indicate the git hash in Lhotse; I'm not sure if there's a way to do it in general for pip-installed packages (think installed either...
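As an aside, one possible angle: when a package is pip-installed directly from a git URL, pip records PEP 610 metadata (direct_url.json) that includes the commit id, readable via importlib.metadata. A minimal sketch, assuming that install path (the fallback behavior and function name are my own, not Lhotse code):

```python
import json
from typing import Optional
from importlib.metadata import PackageNotFoundError, distribution

def get_install_commit(package: str) -> Optional[str]:
    """Best-effort lookup of the git commit a pip-installed package came from.

    Only works when the package was installed directly from a VCS URL
    (e.g. pip install git+https://...), in which case pip records PEP 610
    metadata in direct_url.json.
    """
    try:
        raw = distribution(package).read_text("direct_url.json")
    except PackageNotFoundError:
        return None
    if raw is None:
        return None  # e.g. installed from PyPI: no VCS info recorded
    return json.loads(raw).get("vcs_info", {}).get("commit_id")

print(get_install_commit("lhotse"))
```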
Cool! I'm adding dev/test other in #134
It’s the 100h subset > On 3/29/21, at 06:53, rickychanhoyin ***@***.***> wrote: > > > are these results from train-clean-100 or the full set of librispeech...
Yeah, I changed full-libri to be true by default after you suggested that we move to it. I might have forgotten to announce it though. About coding style/abstractions/code structure: honestly,...
FYI, I checked how long it makes sense to train our conformer on full LibriSpeech (at least with the current settings) -- I ran it for 30 epochs; the...
I used 5 for both. These differences are so small that I'm not convinced it has a real effect. The last row you showed is with no averaging, right? BTW...
Ah, okay — thanks, that makes it clear for me. I didn’t think to try averaging with so many models, apparently it helps a bit.
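For reference, checkpoint averaging here usually just means taking the element-wise mean of the parameter tensors across the last N saved checkpoints. A minimal sketch, assuming checkpoints saved as plain state dicts with torch.save (the file layout and names are hypothetical, not the project's actual code):

```python
import torch

def average_checkpoints(paths):
    """Element-wise mean of parameter tensors across several checkpoints.

    Assumes each file is a state dict saved with torch.save and that all
    checkpoints have identical keys and tensor shapes.
    """
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg is None:
            # cast to float so integer buffers survive the division below
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].float()
    return {k: v / len(paths) for k, v in avg.items()}

# e.g. average the checkpoints from the last 5 epochs (hypothetical paths)
averaged = average_checkpoints([f"exp/epoch-{i}.pt" for i in range(25, 30)])
```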
I'm a bit familiar with the huggingface/transformers library; the way they do it is by matching the module names and loading the weights from the checkpoint only for the matching...
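The pattern being described can be sketched in plain PyTorch: keep only the checkpoint entries whose names (and shapes) match the target model, then load with strict=False. A minimal sketch of the idea, not huggingface/transformers' actual implementation:

```python
import torch

def load_matching_weights(model: torch.nn.Module, ckpt_path: str) -> None:
    """Load only the checkpoint entries whose names and shapes match
    the target model; everything else keeps its fresh initialization.
    """
    ckpt = torch.load(ckpt_path, map_location="cpu")
    model_state = model.state_dict()
    matched = {
        k: v for k, v in ckpt.items()
        if k in model_state and v.shape == model_state[k].shape
    }
    # strict=False tolerates model keys that the checkpoint doesn't cover
    result = model.load_state_dict(matched, strict=False)
    print(f"loaded {len(matched)}/{len(model_state)} tensors, "
          f"{len(result.missing_keys)} left at their initial values")
```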
That's a cool trick. Why does it work?