
WLK uses large-v3 instead of large-v3-turbo

Open sunarowicz opened this issue 3 weeks ago • 1 comment

Hi, first of all thank you for this excellent tool!

There seems to be one issue. If I run wlk --host 0.0.0.0 --port 8800 --model-path /opt/AI/MODELS/whisper --model large-v3-turbo, it doesn't use the large-v3-turbo model but large-v3 instead, if large-v3 is also present in the /opt/AI/MODELS/whisper directory. I can tell by the amount of occupied VRAM: if both models are in the referenced directory, WLK takes approx. 10 GB of VRAM no matter which of the two models I set on the command line. But if I set large-v3-turbo to be used and remove large-v3 from the dir, WLK takes less than 6 GB of VRAM, as it should for the turbo model.

> mamba list | grep whisper
  faster-whisper            1.2.0        pypi_0              pypi       
  whisperlivekit            0.2.16.dev0  pypi_0              pypi

sunarowicz commented Dec 05 '25 12:12

Hi, if you use --model-path, --model is ignored. And --model-path uses the first model it finds in the folder, so probably large-v3 in your case. The --model-path parameter is honestly quite a challenge, given the variety of things (different frameworks, different filenames and extensions, etc.) that can be found in a folder.

Do you have a reason why you need both models in the same folder?

QuentinFuxa commented Dec 05 '25 15:12

I see now that the meaning of --model-path is different from what I thought. My bad, sorry. I had been using --model-path because I found --model_cache_dir not working as described in docs/default_and_custom_models.md. If I have the (larger) models in ~/.cache/whisper, everything works fine. But if I move this directory somewhere else, say to /opt/AI/MODELS/whisper, and pass --model_cache_dir /opt/AI/MODELS/whisper, wlk doesn't use the (larger) model from there; instead it creates ~/.cache/whisper and starts downloading the model there. So it essentially ignores the --model_cache_dir parameter.

Therefore I tried --model-path, which seemed to work until I found the problem reported in my first post. Now I understand I misused this parameter for that purpose.

So I think the real issue is that the model is not taken from the directory specified in --model_cache_dir, but from ~/.cache/whisper instead.

I guess --model_cache_dir might be meant as a replacement for ~/.cache/huggingface, from where wlk takes the smaller models in my experience (tiny, base, small, if I remember correctly). But the documentation says it overrides ~/.cache/whisper, from where wlk takes the larger models.

I test wlk in various scenarios, so I work with multiple models and would like to have them stored in the same directory.

sunarowicz commented Dec 08 '25 10:12