Peter Plantinga
The best place to start is with the `speechbrain.nnet.adapters` module, which already contains the code needed to select layers and replace them.
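For illustration, here is a rough sketch of the select-and-replace pattern that module implements; the `LoRALinear` wrapper and `replace_linears` helper below are hypothetical names for this example, not the actual SpeechBrain API:

```python
import torch.nn as nn

class LoRALinear(nn.Module):
    """Hypothetical LoRA-style adapter: frozen base layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # adapter starts as a no-op

    def forward(self, x):
        return self.base(x) + self.up(self.down(x))

def replace_linears(model: nn.Module, rank: int = 8) -> None:
    """Recursively swap every nn.Linear in `model` for a LoRALinear wrapper, in place."""
    for name, child in list(model.named_children()):
        if isinstance(child, nn.Linear):
            setattr(model, name, LoRALinear(child, rank))
        else:
            replace_linears(child, rank)
```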
Follow-up issue: the LM and tokenizer are no longer linked by default, meaning that even with this issue fixed, we will still have another issue, which is to make these...
Which brings up the question: should we keep the new default (NO_LINK) for the Pretrainer, or should we revert it to SYMLINK? See https://speechbrain.readthedocs.io/en/latest/API/speechbrain.utils.parameter_transfer.html#speechbrain.utils.parameter_transfer.Pretrainer.collect_files When I suggested changing the default to NO_LINK...
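To make the tradeoff concrete, here is a toy sketch of what the two local strategies mean (purely illustrative, not the SpeechBrain implementation): SYMLINK places a link to the cached file inside the collection directory, so related files end up side by side, while NO_LINK just uses the cached file wherever it already lives:

```python
import os
from pathlib import Path

def collect_file(cached: Path, collect_dir: Path, strategy: str = "NO_LINK") -> Path:
    """Toy illustration of SYMLINK vs. NO_LINK collection semantics."""
    if strategy == "SYMLINK":
        link = collect_dir / cached.name
        if not link.exists():
            os.symlink(cached.resolve(), link)
        return link  # downstream code sees a stable path inside collect_dir
    return cached    # NO_LINK: use the cached file in place
```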
Some of these should get implemented when we get a chance. My thoughts in response to your proposal: > I wonder if the following semantics **compared to v1.0.1** (not the...
Solved by #2711
Perhaps one thing we could do here is move the core changes to another PR: i.e. the four core (non-lobes) files in `nnet`, `utils`, and `loss`. We could merge this...
I think this sort of recipe is sorely needed for open-source research. NeMo has a similar recipe for their ASRset, but it is not open-sourced. I'm wondering if this recipe could...
Good to know that dynamic shapes and FFT are supported; this was a major blocker for a number of SpeechBrain models. If there are any changes to the model we...
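As a quick sanity check of that claim, something along these lines should export with both a dynamic time axis and an FFT op (a minimal sketch using `torch.export`, assuming that is the export path under discussion; `SpectralFrontend` is just a stand-in, not an actual SpeechBrain model):

```python
import torch
from torch.export import Dim, export

class SpectralFrontend(torch.nn.Module):
    """Toy front-end: magnitude spectrum via an FFT over the time axis."""
    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        return torch.fft.rfft(wav, dim=-1).abs()

model = SpectralFrontend().eval()
example = torch.randn(2, 16000)

# Mark batch and time as dynamic so the exported graph is not specialized
# to the example input's shape.
batch, time = Dim("batch"), Dim("time")
exported = export(model, (example,), dynamic_shapes={"wav": {0: batch, 1: time}})
print(exported)
```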
Dictionaries should be fine, right? From https://pytorch.org/docs/stable/generated/torch.load.html: "weights_only – Indicates whether unpickler should be restricted to loading only tensors, primitive types, dictionaries and any types added via [torch.serialization.add_safe_globals()](https://pytorch.org/docs/stable/notes/serialization.html#torch.serialization.add_safe_globals). See [torch.load...
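For what it's worth, a quick sketch of how I read that: a checkpoint containing only tensors, primitives, and dicts should load under `weights_only=True` without any allowlisting, and anything else needs `add_safe_globals`:

```python
import torch

# Tensors, primitives, and dicts load fine under weights_only=True.
ckpt = {"epoch": 3, "model": {"weight": torch.randn(4, 4)}}
torch.save(ckpt, "ckpt.pt")
restored = torch.load("ckpt.pt", weights_only=True)

# Arbitrary custom classes are rejected unless allowlisted first, e.g.:
# torch.serialization.add_safe_globals([MyCustomClass])
```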