Fangjun Kuang

683 comments by Fangjun Kuang

> However, the encoder onnx model does not give the same output if the input wav is different from the one used for exporting the onnx model. I think scaling (scale.py) might...

```python3
if torch.jit.is_tracing():
    rows = torch.arange(start=time1 - 1, end=-1, step=-1)
    cols = torch.arange(time1)
    rows = rows.repeat(batch_size * num_heads).unsqueeze(-1)
    indexes = rows + cols
    x = x.reshape(-1, n)
    x = torch.gather(x, ...
```
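For context, the point of the tracing-friendly `torch.gather` version above is that the exported encoder should give the same output as the PyTorch model even for a wav other than the one used during export. Below is a minimal sketch of that check; `TinyEncoder` is a trivial stand-in (it won't itself exhibit the problem) and the file/tensor names are placeholders, not the exact ones from the export script:

```python3
# Sketch only: TinyEncoder, "encoder.onnx", and the input name "x" are
# assumptions. The technique is: export with one input length, then verify
# the ONNX model matches PyTorch on a *different* input length.
import numpy as np
import onnxruntime
import torch
import torch.nn as nn


class TinyEncoder(nn.Module):
    """A stand-in encoder: a single linear layer over the feature dim."""

    def __init__(self, feature_dim: int = 80, d_model: int = 256):
        super().__init__()
        self.proj = nn.Linear(feature_dim, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feature_dim) -> (batch, time, d_model)
        return self.proj(x)


encoder = TinyEncoder()
encoder.eval()

# Export with one "wav" (input length 100) ...
export_input = torch.rand(1, 100, 80)
torch.onnx.export(
    encoder,
    export_input,
    "encoder.onnx",
    input_names=["x"],
    output_names=["encoder_out"],
    dynamic_axes={"x": {0: "N", 1: "T"}, "encoder_out": {0: "N", 1: "T"}},
)

# ... then compare on a *different* input length (500).
test_input = torch.rand(1, 500, 80)
with torch.no_grad():
    torch_out = encoder(test_input)

sess = onnxruntime.InferenceSession("encoder.onnx")
onnx_out = sess.run(None, {"x": test_input.numpy()})[0]

# Should print True. If the exported graph hard-codes shapes taken from the
# export wav, this is where the mismatch shows up.
print(np.allclose(torch_out.numpy(), onnx_out, atol=1e-5))
```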

@EmreOzkose Great work! Supporting more than one inference framework in `sherpa` makes the code difficult to maintain. Would you mind porting your ONNX-related C++ code to https://github.com/k2-fsa/sherpa-onnx? Thanks!

> Is it a proper way for you?

Yes, that looks good to me.

> offline_server.py goes OOM if the audio file is 30+ minutes on a 16GB GPU. No message is printed on the screen, but when I hit a ^C on the...

By the way, do you still see the ping timeout in the offline case if you don't use waves with long durations?

> In an offline case, I could do only 2 or a maximum of 3 channels at any time. Too many channels caused OOM. But with only 2-3 clients connected...

Could you compare the decoded results among them? You can use `vimdiff` to compare the `recogs-xxx.txt` files. Are there many `<unk>`s in sherpa-based decoding for TEDLIUM?
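To make that comparison concrete, here is a small sketch of the kind of check I have in mind; the two file names are placeholders for the actual `recogs-xxx.txt` files, and it assumes they are plain text with one line per utterance:

```python3
# Sketch only: the file names are placeholders for the actual
# recogs-xxx.txt files produced by the two decoding setups.
from pathlib import Path

lines_a = Path("recogs-tedlium-sherpa.txt").read_text().splitlines()
lines_b = Path("recogs-tedlium-offline.txt").read_text().splitlines()

# Count lines that differ between the two decodings.
num_diff = sum(1 for a, b in zip(lines_a, lines_b) if a != b)
print(f"{num_diff} differing lines out of {min(len(lines_a), len(lines_b))}")

# Count <unk> tokens in each file.
for name, lines in [("sherpa", lines_a), ("offline", lines_b)]:
    num_unk = sum(line.count("<unk>") for line in lines)
    print(f"{name}: {num_unk} <unk> tokens")
```

`vimdiff` is still the quickest way to eyeball where the hypotheses diverge; the script above is just for counting.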

Sorry, I have not looked into it yet. I need to reproduce it locally first.

Sorry for the late reply. Will look into it during the holiday.