Results: 82 comments by Yuekai Zhang

> @yuekaizhang is it possible? Yes, it's possible to add timestamps. Currently, GPU inference uses this [ctc_decoder](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/third_party/ctc_decoders), which needs to be modified to add timestamps. Or we could replace it with...
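
For illustration, a minimal sketch of how frame-level timestamps could be derived from greedy CTC output; the 0.04 s frame shift, the blank id of 0, and the function name are assumptions for this sketch, not part of the PaddleSpeech ctc_decoder API.

```python
# Hypothetical sketch: derive token timestamps from greedy CTC output.
# Assumes log_probs has shape (num_frames, vocab_size), blank id 0, and
# a 0.04 s frame shift (this depends on the model's subsampling factor).
import numpy as np

def ctc_greedy_with_timestamps(log_probs: np.ndarray,
                               frame_shift: float = 0.04,
                               blank_id: int = 0):
    """Return a list of (token_id, start_time_in_seconds)."""
    best = log_probs.argmax(axis=-1)  # best token per frame
    results, prev = [], blank_id
    for frame_idx, token in enumerate(best):
        # Standard CTC collapsing: emit a token only when it is
        # non-blank and differs from the previous frame's token.
        if token != blank_id and token != prev:
            results.append((int(token), frame_idx * frame_shift))
        prev = token
    return results
```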

> Do you mind checking this issue: [triton-inference-server/server#3777 (comment)](https://github.com/triton-inference-server/server/issues/3777#issuecomment-1027418452) ? > > There's an invalid pointer issue related to kaldifeat + pybind. @Slyne Hi, I was wondering if you figure...

> Hi @yuekaizhang , i have tried to use add LM with triton decoder, it loads successfully but got no o/p or weird o/p. > > I have tried 2...

Which modeling unit are you using? If you are using words, please set the space id according to your vocab. If you are using bpe or char, set space_id to...
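
As a toy illustration of the point above, a hedged sketch; the helper name, the toy vocab, and the use of -1 as the "no space" value are assumptions for illustration only (the actual value is truncated above).

```python
# Hypothetical helper: pick a space_id for the CTC decoder depending on
# the modeling unit. The -1 sentinel is only an illustrative placeholder.
def pick_space_id(vocab, modeling_unit):
    if modeling_unit == "word":
        # Word-level vocab: use the id of the space symbol itself.
        return vocab.index(" ")
    # BPE/char units mark word boundaries inside the tokens, so no
    # dedicated space symbol is looked up.
    return -1

vocab = ["<blk>", "a", "b", " ", "c"]   # toy example
print(pick_space_id(vocab, "word"))     # -> 3
print(pick_space_id(vocab, "bpe"))      # -> -1
```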

@zhao-lun https://github.com/k2-fsa/icefall/blob/master/egs/aishell/ASR/whisper/decode.py#L286-L288, check this to normalize the text before computing metrics. If you have some free time, feel free to make a PR to triton-asr-client/client.py.
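
A minimal sketch of the kind of normalization meant here (not the exact code in the linked decode.py): apply the same cleanup to hypotheses and references before scoring, so punctuation and casing do not inflate WER/CER.

```python
# Hypothetical normalization step applied to both hyps and refs before
# computing metrics; the punctuation set here is only an example.
import re
import string

_PUNCT = set(string.punctuation) | set("，。！？、：；“”‘’（）")

def normalize(text: str) -> str:
    text = text.lower()
    text = "".join(ch for ch in text if ch not in _PUNCT)
    return re.sub(r"\s+", " ", text).strip()

print(normalize("你好，世界！"))    # -> 你好世界
print(normalize("Hello,  WORLD!"))  # -> hello world
```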

> I am checking with sequence_batching, and sequence_batching{ max_sequence_idle_microseconds: 5000000 oldest { max_candidate_sequences: 1024 max_queue_delay_microseconds: 5000 } > > why we have 1024 max_candidate_sequences, if we use direct() isn't going...

> @yuekaizhang I have tried both direct and with oldest, and for stream application direct is much better, as my stream app is working on each 10msec. I only have...
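
For streaming workloads like the one described above, a hedged sketch of what a `direct` sequence_batching block in a Triton config.pbtxt could look like; the timeout and queue-delay values are illustrative, not the settings used in this thread.

```
sequence_batching {
  # Free a sequence's slot if no new chunk arrives within 5 s.
  max_sequence_idle_microseconds: 5000000
  direct {
    # Direct scheduling pins each sequence to a fixed batch slot,
    # which usually suits low-latency streaming (e.g. 10 ms chunks)
    # better than the oldest-first strategy.
    max_queue_delay_microseconds: 1000
  }
}
```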

Hi, I was wondering if this PR is ready to test or not. I'd like to check out [csukuangfj:mixed-precision-2022-12-07](https://github.com/csukuangfj/sherpa/tree/mixed-precision-2022-12-07) and do a test.

> > Hi, I was wondering if this PR is ready to test or not. > > I have tested it locally but it does not seem to improve the...

> Hi, you could use sherpa/triton/client/decode_manifest.py to decode a whole dataset. This is a reference for benchmarking a Chinese dataset: https://k2-fsa.github.io/sherpa/triton/client/index.html#decode-manifests. Once the server is launched, you could use this soar97/triton-k2:22.12.1 pre-built...
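
For a single utterance (rather than a whole manifest), a hedged sketch of a Triton gRPC client is shown below; the model name and the tensor names (`WAV`, `WAV_LENS`, `TRANSCRIPTS`) are assumptions and must match the server's config.pbtxt.

```python
# Hypothetical client sketch using the tritonclient package; tensor and
# model names are assumptions that must match the deployed config.pbtxt.
import numpy as np
import soundfile as sf
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="localhost:8001")

# Load a 16 kHz mono wav as float32 samples, shape (1, num_samples).
samples, _ = sf.read("test.wav", dtype="float32")
samples = samples.reshape(1, -1)
lengths = np.array([[samples.shape[1]]], dtype=np.int32)

inputs = [
    grpcclient.InferInput("WAV", list(samples.shape), "FP32"),
    grpcclient.InferInput("WAV_LENS", list(lengths.shape), "INT32"),
]
inputs[0].set_data_from_numpy(samples)
inputs[1].set_data_from_numpy(lengths)

outputs = [grpcclient.InferRequestedOutput("TRANSCRIPTS")]
response = client.infer(model_name="transducer", inputs=inputs, outputs=outputs)

# The output is a byte-string tensor; its exact shape depends on the model.
print(response.as_numpy("TRANSCRIPTS")[0].decode("utf-8"))
```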