✌️ Mohamed Anwar


So, theoretically, commenting out these two assertions won't affect performance... right? And making the tensors `contiguous` will just help a little with memory?
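For context, a minimal pure-Python sketch (an illustration, not PyTorch itself) of why making a tensor `contiguous` touches memory: a row-major tensor's strides determine where element `(i, j)` lives in flat storage, a transpose is just a stride swap that shares the same storage, and `.contiguous()` has to copy the data into row-major order again. All names here are hypothetical helpers for the illustration.

```python
# Illustrative sketch of row-major (C-order) layouts; not the PyTorch implementation.

def strides_for(shape):
    """Row-major strides for a given shape: element (i, j) sits at i*cols + j."""
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]
    return tuple(strides)

def is_contiguous(shape, strides):
    """A layout is contiguous iff its strides match the row-major ones."""
    return tuple(strides) == strides_for(shape)

shape = (3, 4)
strides = strides_for(shape)              # (4, 1)
t_shape = shape[::-1]                     # transpose view: (4, 3)
t_strides = strides[::-1]                 # (1, 4) -- same storage, no copy
print(is_contiguous(shape, strides))      # True
print(is_contiguous(t_shape, t_strides))  # False: .contiguous() would copy here
```

So the copy is the only cost: a contiguous layout lets downstream kernels read memory sequentially, which is why it can help a bit with memory behavior without changing the math.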

I will be out for a month starting today. I will start working on Sphinx tomorrow, and I have other work to do that is much...

Hi @mravanelli, the following is a comparison between `torchaudio.rnnt_loss` and `fast_rnnt.pruned_loss` with different prune ranges (5, 40, and 115), using my current implementation on the mTEDx-Fr dataset with...

Yes, @mravanelli! I'm just waiting for a few more epochs before reporting the results.

Hi @danpovey @mravanelli, the model's CER & WER didn't improve even after using warmup. I trained two models on the same dataset using the same hyperparameters (`prune_range=5`); one...

@danpovey, I really appreciate your quick responses. And sorry about that; I should've provided more details.

> What warmup schedule did you use, i.e. how many batches does the warmup last?...
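As a point of reference for the warmup question, a minimal sketch of a linear warmup schedule; the function name, step counts, and learning rates below are illustrative assumptions, not the schedule actually used in the experiments above.

```python
# Hypothetical linear warmup: ramp the LR from 0 to base_lr over warmup_steps,
# then hold it constant. Values are for illustration only.

def warmup_lr(step, warmup_steps, base_lr):
    """Learning rate at a given training step under linear warmup."""
    return base_lr * min(1.0, step / max(1, warmup_steps))

for step in (0, 250, 500, 1000, 2000):
    print(step, warmup_lr(step, warmup_steps=1000, base_lr=1e-3))
```

With a schedule like this, "how many batches does the warmup last" is exactly the `warmup_steps` knob being asked about.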

Hi, @anautsch! Yes, there are some that I was intending to add but didn't have the time. I think I can add them to this PR by this weekend, inshallah.

Hi @anautsch, I have updated this PR with the latest changes as promised. I've also updated the [PR description](https://github.com/speechbrain/speechbrain/pull/1465#issue-1280460546). Please feel free to get back to me if you have...

Hi @anautsch, I think all the problems above are resolved now.

I agree, using `BART` is more suitable for that when you set `match_source_len=False`. Load the BART base model:

```python
bart = torch.hub.load('pytorch/fairseq', 'bart.base')  # takes around two minutes
bart.eval()  # enable evaluation...
```