Quentin Fuxa
> always busy. - I love it. - I love it. There is so much to see and do. - I like my city as well. - But that is...
Okay, thank you for the clarification. So (part of) the issue is that you use `wlk --model large-v3-turbo --language zh-CN -l INFO --warmup-file ./jfk.wav`. If you look at https://github.com/QuentinFuxa/WhisperLiveKit/blob/main/docs/supported_languages.md, ...
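For reference, and assuming the supported-languages doc lists the plain Whisper code `zh` rather than a locale code like `zh-CN`, the invocation would likely become `wlk --model large-v3-turbo --language zh -l INFO --warmup-file ./jfk.wav`.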
Hi, yeah, the model struggles when it does not have enough tokens to work on (at the beginning of sentences). I plan to work on a fix for that.
Work in progress, will be in the next release
Hi, a fix for that has been made in version 0.2.11.post1.
Hi, that is probably unrelated; I will look into it.
You can develop a frontend in any language, as long as you use the websocket /asr endpoint. It is not in the project's plans to develop a native frontend for...
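To illustrate, here is a minimal sketch of a non-browser client for the `/asr` WebSocket endpoint. The host/port (`localhost:8000`), the chunking/pacing, and the audio encoding the server expects (e.g. WebM/Opus from a browser `MediaRecorder` vs raw PCM) are assumptions that depend on how you run the server, so treat this only as a starting point.

```python
# Minimal sketch of a non-browser client for the /asr WebSocket endpoint.
# Assumptions (not from the project docs): server on localhost:8000,
# binary audio chunks in, JSON transcription updates out.
# Requires: pip install websockets
import asyncio
import json
import websockets

async def stream_file(path: str, chunk_size: int = 16000) -> None:
    async with websockets.connect("ws://localhost:8000/asr") as ws:

        async def receive() -> None:
            # Print every message the server pushes back to the client.
            async for message in ws:
                print(json.loads(message))

        receiver = asyncio.create_task(receive())

        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                await ws.send(chunk)       # binary audio frame
                await asyncio.sleep(0.25)  # pace roughly like a live stream

        await asyncio.sleep(3)  # let the server flush remaining results
        await ws.close()
        await receiver

asyncio.run(stream_file("./jfk.wav"))
```

The same pattern works from any language with a WebSocket client: open the connection, stream audio bytes, and read transcription messages as they arrive.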
Yeah, I don't have several GPUs to test with, so I am not sure.
Hi, this error means your system can’t find a compatible cuDNN library. Faster-Whisper requires that your CUDA + cuDNN versions match the PyTorch/CTranslate2 builds you've installed. On Arch/CachyOS the easiest...
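As a first diagnostic, something like the sketch below can show what your Python environment actually sees before you start installing packages (assuming PyTorch and CTranslate2 are installed in the same environment you run WhisperLiveKit from):

```python
# Quick sanity check of CUDA/cuDNN visibility from Python.
import torch
import ctranslate2

print("PyTorch built for CUDA:", torch.version.cuda)
print("cuDNN visible to PyTorch:", torch.backends.cudnn.version())
print("CTranslate2 version:", ctranslate2.__version__)
print("CUDA devices seen by CTranslate2:", ctranslate2.get_cuda_device_count())
```

If PyTorch reports a cuDNN version but CTranslate2 still fails to load it, the mismatch is usually between the system cuDNN libraries and the CTranslate2 build rather than in PyTorch itself.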
Hi, yes, having a stable (and faster) translation is a work in progress; it will be in the next release.