Whisper support
Is support for Whisper on the roadmap? Something like https://github.com/ggerganov/whisper.cpp would be great.
Supporting encoder-decoder models is on our roadmap, as mentioned in #187. Feel free to join the discussion and potentially contribute!
+1 for this feature
+2 for this feature
+3 for this feature
+4 for this feature
+555
+1
monitoring
@zhuohan123 I am working on Whisper support.
NO WAY!!!!!!!!!!!!!!!!!!! THAT WILL BE AWESOME!!!!!!!!!!!!!!!!!!!!!
I am working on this PR, and will soon submit the draft.
THIS IS GOING TO BE HUGE, THX!
Hey @libratiger, together with @afeldman-nm I am now working full-time on the same target. Would you like to sync? It would be more efficient to share knowledge, rather than develop the same thing in two silos.
You're right. I've just discovered a discussion about T5 (https://github.com/vllm-project/vllm/issues/187#issuecomment-1825244021), where there are differing opinions on encoder-decoder model support. Perhaps things will improve after that PR is merged?
@libratiger the current status is as follows: Neural Magic has finalized the original T5 PR, and we are now benchmarking the solution. In parallel, we are also developing support for Whisper.
@dbogunowicz any update on this issue? Looking forward to it.
Hi! I am working on Whisper on our team's fork: https://github.com/neuralmagic/nm-vllm/pull/147 The status is: I am running inference (both prompt prefill and autoregressive decoding), but I am hitting correctness issues, most likely caused by an erroneous attention mask implementation.
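For context, an encoder/decoder model like Whisper needs three different attention masks, and they are easy to mix up. Below is a minimal sketch of the intended shapes; it is illustrative only (the function name and layout are hypothetical, not the code from the linked PR):

```python
import torch

def build_masks(num_encoder_tokens: int, num_decoder_tokens: int):
    """Illustrative attention masks for an encoder/decoder model (True = may attend)."""
    # Encoder self-attention: fully bidirectional over the audio features.
    enc_self = torch.ones(num_encoder_tokens, num_encoder_tokens, dtype=torch.bool)

    # Decoder self-attention: causal, each token sees itself and earlier tokens.
    dec_self = torch.tril(
        torch.ones(num_decoder_tokens, num_decoder_tokens, dtype=torch.bool)
    )

    # Cross-attention: every decoder token attends to every encoder token;
    # accidentally applying a causal mask here is a classic source of
    # garbled transcripts.
    cross = torch.ones(num_decoder_tokens, num_encoder_tokens, dtype=torch.bool)

    return enc_self, dec_self, cross
```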
@dbogunowicz I ran the feature/demian/Whisper branch to try the Whisper model and hit an error: vllm/worker/model_runner.py, line 477, in prepare_decode: multi_modal_input) NameError: name 'multi_modal_input' is not defined. Execution cannot proceed.
@junior-zsy fixed for now. Please remember that we are still working on that PR, so it's very much in a WIP state. Let me explicitly set the appropriate flag on the PR.
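For anyone curious, that class of NameError usually comes from a variable assigned on only one code path. A hypothetical simplification of the failure mode (not the actual model_runner.py code or fix):

```python
def prepare_decode(seq_group_metadata_list):
    # Fix pattern: initialize on every code path before use.
    multi_modal_input = None
    for seq_group_metadata in seq_group_metadata_list:
        if getattr(seq_group_metadata, "multi_modal_data", None) is not None:
            multi_modal_input = seq_group_metadata.multi_modal_data
    # Before the fix, this name could be referenced without ever being
    # assigned, raising: NameError: name 'multi_modal_input' is not defined
    return multi_modal_input
```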
@dbogunowicz Ok, thank you. Hope it can be used soon
same here, this is going to be really cool!
@dbogunowicz thanks for your work on Whisper! Since there is clearly interest in this feature and its completion timeline, I want to add the context that Whisper support takes a dependency on encoder/decoder support:
Issue: https://github.com/vllm-project/vllm/issues/187 PR: https://github.com/vllm-project/vllm/pull/3117
which is also WIP (currently works partially but is not quite complete). I expect to complete encoder/decoder support soon. JFYI for anyone interested in timelines.
+1
See the encoder/decoder support issue (https://github.com/vllm-project/vllm/issues/187) and new PR (https://github.com/vllm-project/vllm/pull/4289) for a status update on encoder/decoder support, which is a prereq for Whisper support.
Hi, any update on serving faster-whisper via VLLM?
Hi @twicer-is-coder ,
Whisper (or any variant thereof) is high on the list of models to add once infrastructure support is in; you can see the roadmap for infrastructure support in this PR:
https://github.com/vllm-project/vllm/pull/4942
FYI, encoder/decoder support landed in #4942 and there is an RFC (#7366) for follow-on encoder/decoder-related tasks, including adding Whisper support; the feedback period runs until August 16th. See https://github.com/vllm-project/vllm/issues/187#issuecomment-2278777339
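For reference, offline inference against an encoder/decoder model after #4942 looks roughly like the sketch below, modeled on the BART example from the vLLM examples directory. The model choice and prompt strings here are illustrative, and the exact API may differ between vLLM versions:

```python
from vllm import LLM, SamplingParams

# BART is one of the first encoder/decoder models wired up by #4942.
llm = LLM(model="facebook/bart-large-cnn", dtype="float")

prompts = [
    # A plain string is treated as the encoder prompt.
    "vLLM is a high-throughput inference engine for large language models.",
    # Encoder and decoder prompts can also be passed explicitly.
    {
        "encoder_prompt": "vLLM is a high-throughput inference engine.",
        "decoder_prompt": "",
    },
]

outputs = llm.generate(prompts, SamplingParams(temperature=0.0, max_tokens=30))
for output in outputs:
    print(output.outputs[0].text)
```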
are you kidding me? is whisper supported now by vllm?
Adding Whisper support will hopefully follow shortly now that we have the encoder/decoder infrastructure landed. This is part of the RFC.
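To make the target concrete: once the RFC tasks land, Whisper transcription could plausibly ride through vLLM's existing multi-modal input mechanism, something like the hypothetical sketch below. None of this is a shipped API yet; the audio plumbing, prompt format, and max_model_len value are assumptions:

```python
import librosa
from vllm import LLM, SamplingParams

# Hypothetical: assumes Whisper is registered as an encoder/decoder model
# and audio arrives via multi_modal_data, as images do for multi-modal LLMs.
llm = LLM(model="openai/whisper-large-v3", max_model_len=448)

# Whisper expects 16 kHz mono audio.
waveform, sample_rate = librosa.load("sample.wav", sr=16000)

prompt = {
    "prompt": "<|startoftranscript|>",  # Whisper's decoder start token
    "multi_modal_data": {"audio": (waveform, sample_rate)},
}

outputs = llm.generate(prompt, SamplingParams(temperature=0.0, max_tokens=200))
print(outputs[0].outputs[0].text)
```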