Sangchun Ha

Results 35 comments of Sangchun Ha

@yunigma I think it's a subword-related issue rather than a LibriSpeech dataset issue. I've confirmed that training works with kspon_character, so how about trying it with libri_character?

@yunigma I think I trained for about 36 hours. The detailed parameters are written in the log. I wonder what might have made the difference. 😒 I'll test it out when...

@Narsil @devxpy @ArthurZucker I also did fine-tuning without timestamps, and now I have an issue where timestamps are not appearing. Is there a good way to fine-tune and still include timestamps?...

@ArthurZucker @sanchit-gandhi Thank you so much for the detailed explanation. I'm trying to download a new tokenizer, but it seems like it was updated 5 months ago. Can I get...

@hollance Thanks for adding a nice feature. I understand that the cross-attention weights are used to get the token-level timestamps. Then, I think there is no dependence between doing...
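Since the comment above is truncated, here is a minimal sketch of the underlying idea it refers to: token-level timestamps are typically recovered by running dynamic time warping (DTW) over a cost matrix derived from the decoder's cross-attention weights (rows = text tokens, cols = audio frames), yielding a monotonic token-to-frame alignment. This is my own illustration with a toy cost matrix, not code from the library being discussed; in practice the cost would come from averaged attention heads.

```python
import numpy as np

def dtw_path(cost: np.ndarray) -> list[tuple[int, int]]:
    """Minimum-cost monotonic path through `cost`
    (rows = text tokens, cols = audio frames)."""
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    # Accumulate costs; each cell may be reached by advancing the
    # token index, the frame index, or both at once.
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j],      # advance token
                acc[i, j - 1],      # advance frame
                acc[i - 1, j - 1],  # advance both
            )
    # Backtrack from the bottom-right corner to recover the path.
    i, j, path = n, m, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        steps = [acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]]
        k = int(np.argmin(steps))
        if k == 0:
            i, j = i - 1, j - 1
        elif k == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Toy cost matrix: low cost (strong attention) on the diagonal,
# so each token aligns to its matching frame.
cost = np.array([[0.1, 0.9, 0.9],
                 [0.9, 0.1, 0.9],
                 [0.9, 0.9, 0.1]])
path = dtw_path(cost)  # [(0, 0), (1, 1), (2, 2)]
```

A token's start time can then be read off as the first frame the path assigns to it, scaled by the frame duration.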

@oh-young-data Please check the issue. https://github.com/openspeech-team/openspeech/issues/151#issuecomment-1094997018

@girlsending0 The code was written against `pytorch-lightning==1.14.0`. cc. #186 Thank you.

@Seoyoung-Jo I'll look into it. Thank you.

@sdeva14 hello, this is my training environment. I hope this was helpful to you.

```
absl-py==1.2.0
aiohttp==3.8.3
aiosignal==1.2.0
antlr4-python3-runtime==4.8
appdirs==1.4.4
astropy==5.1
asttokens==2.0.8
async-timeout==4.0.2
attrs==22.1.0
audioread==3.0.0
backcall==0.2.0
bleach==5.0.1
cachetools==5.2.0
certifi==2022.9.24
cffi==1.15.1
...
```

@ChaofanTao Thank you for reporting the issue. I will check and leave a comment.