Masao Someki

Results: 58 comments by Masao Someki

@sanjuktasr To obtain the same result, I think we need to use `batch_beam_search_online` (https://github.com/espnet/espnet/blob/master/espnet/nets/batch_beam_search_online.py)

Hi @sanjuktasr and @espnetUser, thank you for your reports; I fixed the streaming-related bugs in #83. In this PR I removed BatchedBeamSearch in the `end` function because we do not necessarily...

@sanjuktasr Thank you, it looks like the final look-ahead tensor is not recognized. I think we need to modify the following lines to include the final look-ahead tensor: https://github.com/espnet/espnet_onnx/blob/46b06f129167c8e27fb36e4ddf15bfe50420f5f2/espnet_onnx/asr/asr_streaming.py#L132-L136 to...

@sanjuktasr

> Also since the encoder has 2 dec places precision can these kind of anomalies be expected?

Am I correct that `abs(torch_output - onnx_output)` is larger than 0.01 for...
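To make that tolerance check concrete, here is a minimal sketch of comparing the two outputs; the random arrays below stand in for the real PyTorch and ONNX encoder outputs, and the 0.01 threshold matches the two-decimal-place precision discussed above.

```python
import numpy as np

# Stand-ins for the real encoder outputs (shape is illustrative).
rng = np.random.default_rng(0)
torch_output = rng.standard_normal((1, 50, 256)).astype(np.float32)
onnx_output = torch_output + rng.uniform(
    -1e-3, 1e-3, torch_output.shape
).astype(np.float32)

# Largest element-wise deviation between the two backends.
max_diff = float(np.max(np.abs(torch_output - onnx_output)))
print(f"max |torch - onnx| = {max_diff:.6f}")

# Flag any anomaly beyond the expected export precision.
assert max_diff < 0.01, "outputs diverge beyond the expected precision"
```

With real model outputs, a `max_diff` well above 0.01 usually points to a conversion or preprocessing mismatch rather than ordinary float rounding.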

Hi @jinggaizi, GPU inference of a quantized model is not supported by onnxruntime, which is why it is slow.
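As a minimal sketch of the workaround, you can pin the quantized model to the CPU execution provider instead of CUDA; the model path below is hypothetical, and the snippet falls back to a plain list so it runs even without onnxruntime installed.

```python
# Hedged sketch: quantized ONNX models are not accelerated on GPU in
# onnxruntime, so run them on the CPU execution provider.
try:
    import onnxruntime as ort
    available = ort.get_available_providers()
except ImportError:  # fallback so the sketch still runs without onnxruntime
    ort = None
    available = ["CPUExecutionProvider"]

# Prefer CPU for the quantized model, even if CUDAExecutionProvider exists.
chosen = ["CPUExecutionProvider"]
print("available:", available, "-> using:", chosen)

# "model.quant.onnx" is a hypothetical path:
# session = ort.InferenceSession("model.quant.onnx", providers=chosen)
```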

Hi @v-yunbin, I fixed the issue in #77, so would you try this again and check whether your issue is resolved?

Hi @yangyi0818, thank you for reporting the issue! About the first point, I would like to know the following information:

- What is your device? CPU or GPU?
- Am...

Thank you! About the RTF, it may be a problem with the frontend process. If you are using the default frontend, which contains stft and logmel, is it possible to...
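One quick way to check this is to time the frontend alone and compute its real-time factor (RTF = processing time / audio duration). A rough sketch follows; the STFT/log parameters are illustrative stand-ins, not espnet_onnx's actual frontend.

```python
import time
import numpy as np

# Illustrative frontend parameters (not espnet_onnx's defaults).
sr, n_fft, hop = 16000, 512, 128
audio = np.random.default_rng(0).standard_normal(sr * 10).astype(np.float32)  # 10 s

start = time.perf_counter()
# Frame the signal, window it, and take the power spectrum (an STFT).
frames = np.lib.stride_tricks.sliding_window_view(audio, n_fft)[::hop]
spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=-1)) ** 2
# Stand-in for the mel projection + log step.
logmel_like = np.log(spec + 1e-10)
elapsed = time.perf_counter() - start

rtf = elapsed / (len(audio) / sr)
print(f"frontend-only RTF: {rtf:.4f}")
```

If this frontend-only RTF is already a large fraction of the end-to-end RTF, the bottleneck is the feature extraction rather than the ONNX model itself.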

@joazoa You can limit the number of threads with the following options:

- `inter_op_num_threads = 1`
- `intra_op_num_threads = 1`

Currently, there is no script to limit the number of...
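A minimal sketch of setting those two options on an onnxruntime session follows; the model path is hypothetical, and the snippet falls back to a plain object so it runs without onnxruntime installed.

```python
# Hedged sketch: limiting onnxruntime's thread pools via SessionOptions.
try:
    import onnxruntime as ort
    opts = ort.SessionOptions()
except ImportError:  # stand-in so the sketch runs without onnxruntime
    from types import SimpleNamespace
    opts = SimpleNamespace()

opts.inter_op_num_threads = 1  # threads used to run independent nodes in parallel
opts.intra_op_num_threads = 1  # threads used inside a single operator

# "model.onnx" is a hypothetical path:
# session = ort.InferenceSession("model.onnx", sess_options=opts)
```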

Hi @rajeevbaalwan, I would like to confirm some points:

- Would you tell me which encoder you use in your model?
- Did you observe any similarities between them?