CUDA-Warp RNN-Transducer

18 warp-rnnt issues

@1ytic Hi, so far I have been able to use the loss with DDP on a single GPU, and it behaves more or less as expected. But when I use...

![image](https://user-images.githubusercontent.com/20240391/118402254-b3f07b00-b69b-11eb-9462-b2bba5c4e6e1.png) ![image](https://user-images.githubusercontent.com/20240391/118402293-db474800-b69b-11eb-8d4d-b6c6ab1264c8.png) What does the mismatch mean? My env: python=3.7.0, torch=1.6.0, cuda=10.2.

I got the error shown below. python=3.8, torch version=1.10.2, cudatoolkit=10.2.89, CUDA version=10.2, GCC version=5.4.0. ![1647595370(1)](https://user-images.githubusercontent.com/74249633/158976322-aeb22a90-1a6d-44a4-b95f-ffa11a8b95ba.png)

```python
log_probs (torch.FloatTensor): Input tensor with shape (N, T, U, V)
    where N is the minibatch size, T is the maximum number of input
    frames, U is the maximum number...
```
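
For reference, a minimal sketch (not from the docs) of calling `rnnt_loss()` with tensors of the documented shape; the argument names follow the pytorch_binding README, and the random data is only illustrative:

```python
import torch
from warp_rnnt import rnnt_loss

N, T, U, V = 2, 6, 4, 10  # batch, max frames, max labels + 1 (blank), vocab size

# rnnt_loss expects log-probabilities, so apply log_softmax over the vocab axis
log_probs = torch.randn(N, T, U, V, device="cuda").log_softmax(dim=-1).requires_grad_()
labels = torch.randint(1, V, (N, U - 1), dtype=torch.int32, device="cuda")
frames_lengths = torch.tensor([T, T - 2], dtype=torch.int32, device="cuda")
labels_lengths = torch.tensor([U - 1, U - 2], dtype=torch.int32, device="cuda")

loss = rnnt_loss(log_probs, labels, frames_lengths, labels_lengths,
                 reduction="mean", blank=0)
loss.backward()
```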

```
/***/warp-rnnt/pytorch_binding/warp_rnnt/_C.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNSt19basic_ostringstreamIcSt11char_traitsIcESaIcEEC1Ev
```
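
The mangled symbol demangles to the `std::basic_ostringstream` constructor, which usually indicates a libstdc++/C++ ABI mismatch between the compiler that built the extension and the one that built PyTorch. A quick hedged check before rebuilding:

```python
import torch

print(torch.__version__)
# True means PyTorch was built with the C++11 ABI; the extension must be
# compiled with a matching toolchain/flag, or importing it fails with
# "undefined symbol" errors like the one above.
print(torch.compiled_with_cxx11_abi())
```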

The **compact layout** in memory can be explained with this figure. The input to `rnnt_loss()` is of size `(N, T, U+1, V)=(3, 6, 6, V)` in a normal layout. Colored...

Label: enhancement
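
To make the idea concrete, here is a hedged illustration (plain tensor code, not the library's exact API) of how a padded `(N, T, U+1, V)` input can be packed into a compact layout that stores only each sample's valid `(T_n, U_n+1)` positions:

```python
import torch

def to_compact(log_probs, frames_lengths, labels_lengths):
    """Pack a padded (N, T, U+1, V) tensor into a flat
    (sum_n T_n * (U_n + 1), V) tensor with all padding removed."""
    V = log_probs.size(-1)
    chunks = []
    for n, (t, u) in enumerate(zip(frames_lengths.tolist(),
                                   labels_lengths.tolist())):
        # keep only the first t frames and u+1 label positions of sample n
        chunks.append(log_probs[n, :t, :u + 1].reshape(-1, V))
    return torch.cat(chunks, dim=0)
```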

I want to have a stable loss which is robust to labels_lengths during training. What values should I pass for these two params? Also, what is the approximate relationship...
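
While waiting for an answer, a hedged sanity-check sketch (the helper name is hypothetical) that verifies the length tensors are consistent with the padded shapes, a common source of unstable losses:

```python
import torch

def check_lengths(log_probs, labels, frames_lengths, labels_lengths):
    """Hypothetical helper: assert the length tensors fit the padded shapes."""
    N, T, U, V = log_probs.shape
    assert labels.shape == (N, U - 1), "labels should be (N, U-1) for a (N, T, U, V) input"
    assert int(frames_lengths.max()) <= T, "a frame length exceeds the padded T"
    assert int(labels_lengths.max()) <= U - 1, "a label length exceeds the padded U-1"
```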

The warning messages below are occasionally thrown during training:

```
...
WARNING: sample 10 [81, 25] has a forward/backward mismatch -0.000083 / -0.000083
...
WARNING: sample 11 [62, 28] has a...
```

Hi, I'm using rnnt-loss and pytorch-lightning to train my model. But I found that the 4D tensor used to calculate the transducer loss keeps accumulating on the GPU; when I check...
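
A hedged sketch of the usual fix (the `training_step` body is illustrative, not taken from the issue): log a detached scalar so Lightning does not keep the 4D log-probs graph alive between steps:

```python
import pytorch_lightning as pl
from warp_rnnt import rnnt_loss

class RNNTModule(pl.LightningModule):
    def training_step(self, batch, batch_idx):
        log_probs, labels, xn, yn = batch  # log_probs: (N, T, U, V), log-softmaxed
        loss = rnnt_loss(log_probs, labels, xn, yn, reduction="mean")
        # Log a detached copy; logging the live tensor can retain the whole
        # autograd graph (and the 4D activation) across steps.
        self.log("train_loss", loss.detach())
        return loss
```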

I searched for the error in the title; it usually happens when there are several losses. But in my code there is only the RNN-T loss, yet it still gives the error. The full message is "RuntimeError: Trying...
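
For context, a hedged minimal reproduction of the usual cause: a tensor carried across iterations (e.g., an RNN hidden state) still references the previous step's graph, so the next `backward()` walks into already-freed buffers. Detaching it avoids the RuntimeError:

```python
import torch
import torch.nn as nn

rnn = nn.GRU(8, 8, batch_first=True)
h = None
for step in range(3):
    x = torch.randn(2, 5, 8)
    out, h = rnn(x, h)
    loss = out.pow(2).mean()
    loss.backward()  # raises "Trying to backward through the graph a second
                     # time" on step 2 if h still references step 1's graph
    h = h.detach()   # break the cross-step graph to avoid the error
```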