Sangchun Ha

35 comments by Sangchun Ha

Hello @OleguerCanal! We don't currently provide it. Also, I know that E2E timestamps (including the CTC decoder) perform relatively poorly. How was your experience using CTCBeamDecoder?

Thanks. I'll check it out as soon as possible.

@YuXI-Chn I think I missed that. Thanks for letting me know. I'll fix it as soon as possible.

Hello @sonofbit! There is no inference procedure implemented at the moment; we will add one as soon as possible. Thank you.

Hello @yunigma! I think the error comes from using the `cross_entropy` criterion with the transducer model. Would you try setting the criterion to `transducer`?
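A minimal sketch of why the criterion matters here (the shapes and helper names are illustrative, not openspeech's actual tensors): a transducer's joint network emits a logit lattice with one vocabulary distribution per (time frame, label step) pair, which the flat (batch, vocab) contract of plain cross-entropy cannot consume, so a dedicated transducer loss is required.

```python
# Illustrative shapes only; not openspeech's actual tensor layout.
def transducer_logits_shape(batch: int, t: int, u: int, vocab: int) -> tuple:
    # Joint network output: one vocab distribution per (time frame, label step).
    return (batch, t, u, vocab)

def cross_entropy_accepts(shape: tuple) -> bool:
    # Plain cross-entropy expects 2-D (batch, vocab) logits.
    return len(shape) == 2

shape = transducer_logits_shape(4, 100, 20, 32)
print(cross_entropy_accepts(shape))  # False: a transducer loss is needed instead
```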

Would you like to experiment with the gradient accumulation parameter? [[link]](https://github.com/openspeech-team/openspeech/blob/main/openspeech/dataclass/configurations.py#L184-L186) Since your batch size is 16, it might be a good idea to set `accumulate_grad_batches`...
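A framework-agnostic sketch of what the suggestion does (the function names are hypothetical, not openspeech's API): gradients are summed over several micro-batches before a single optimizer step, so the effective batch size becomes `batch_size * accumulate_grad_batches`.

```python
def effective_batch_size(batch_size: int, accumulate_grad_batches: int) -> int:
    # One optimizer step sees this many samples' worth of gradient.
    return batch_size * accumulate_grad_batches

def train_epoch(micro_batches, compute_grad, apply_update, accumulate_grad_batches):
    # Accumulate gradients over N micro-batches, then apply one averaged update.
    accumulated = 0.0
    for i, mb in enumerate(micro_batches, start=1):
        accumulated += compute_grad(mb)  # sum gradients, no update yet
        if i % accumulate_grad_batches == 0:
            apply_update(accumulated / accumulate_grad_batches)
            accumulated = 0.0

# With batch_size=16 and accumulate_grad_batches=4, each step behaves like batch 64.
print(effective_batch_size(16, 4))  # → 64
```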

@yunigma I'll have to do some more testing. I'm so sorry... 😭

Prediction is not supported yet. We will add it later.

@wuxiuzhi738 I'm so sorry for the late reply. This is not a language model; it is a script that trains the Conformer encoder and LSTM decoder together.