[Feature]: Support token-level timestamps in whisper models
🚀 The feature, motivation and pitch
Dynamic time warping applied to the encoder-decoder cross-attention matrices of whisper models can be used to find a word-level alignment between audio and transcriptions. openai/whisper provides an implementation of this in `find_alignment`, which returns timestamps (start and end) for each word in the transcription (called `text_tokens` there).
This has various use cases for us, and it would be great to have this capability exposed via vLLM.
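For reference, this is roughly how the openai/whisper package exposes these timestamps today through its public API (`word_timestamps=True` routes through `find_alignment` internally); the model size and audio path below are just placeholders:

```python
import whisper

# placeholder model size and audio path
model = whisper.load_model("base")
result = model.transcribe("audio.wav", word_timestamps=True)

for segment in result["segments"]:
    for word in segment["words"]:
        # each entry carries the word text plus start/end times in seconds
        print(f"{word['start']:6.2f} -> {word['end']:6.2f}  {word['word']}")
```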
Alternatives
- one alternative here is to use the reference implementation `find_alignment` from Python directly, calling it once for each sample in a batch of audio samples (or maybe implement a variant of `find_alignment` capable of handling batched inputs); a rough sketch of this per-sample loop is included after this list
- whisper.cpp and the code implemented in this PR are also an option
Both options are feasible but:
- they require the client/user to run custom Python or native code
- neither alternative is efficient or fast for a large number of (possibly concurrent) audio inputs/requests
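For concreteness, here is a rough sketch of what alternative 1 looks like as client code, assuming the internal `whisper.timing.find_alignment` and `whisper.audio` helpers keep their current signatures (the `batch` variable and model size are placeholders):

```python
import whisper
from whisper.audio import N_FRAMES, log_mel_spectrogram, pad_or_trim
from whisper.timing import find_alignment
from whisper.tokenizer import get_tokenizer

model = whisper.load_model("base")
tokenizer = get_tokenizer(model.is_multilingual, language="en", task="transcribe")

def align_one(audio_path, text_tokens):
    # per-sample preprocessing: load the audio and compute the (unpadded) mel spectrogram
    audio = whisper.load_audio(audio_path)
    mel = log_mel_spectrogram(audio, model.dims.n_mels)
    num_frames = mel.shape[-1]  # valid frames before padding to the 30 s window
    mel = pad_or_trim(mel, N_FRAMES).to(model.device, next(model.parameters()).dtype)
    # find_alignment runs the encoder plus a teacher-forced decoder pass and applies
    # DTW to the cross-attention weights, returning per-word start/end timestamps
    return find_alignment(model, tokenizer, text_tokens, mel, num_frames)

# one independent forward pass per sample: no batching, no shared scheduling
alignments = [align_one(path, tokens) for path, tokens in batch]
```

This works, but every request pays the full per-sample overhead in the client process, which is exactly the kind of thing vLLM's batching and scheduling would avoid.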
Additional context
This is the PR for initial whisper support in vLLM, but afaik there is no support for alignment yet.
Two more comments from looking at the reference implementation of `find_alignment`:
- batching the encoder inference should be easy, whereas decoder batching is probably more complicated (due to flash attention and bookkeeping of the cross-attention matrices)
- `text_tokens` could be a transcription produced by the whisper model itself, but it doesn't have to be (it can be any other sequence of tokens, possibly from another model or from human-labeled data). As such, it would be great if vLLM also supported user-provided token inputs for this; see the short sketch below.
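A minimal sketch of the user-provided-tokens case, reusing `align_one` and `tokenizer` from the sketch above (the transcript text and audio path are placeholders, and it assumes the objects returned by `find_alignment` expose `word`/`start`/`end`):

```python
# align an externally supplied transcript (e.g. human-labeled data) instead of
# letting the model transcribe the audio itself
reference_text = " this transcript was written by a human annotator"
text_tokens = tokenizer.encode(reference_text)  # plain text tokens, no special tokens
for timing in align_one("audio.wav", text_tokens):
    print(timing.word, timing.start, timing.end)
```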
cc @mru4913 @NickLucche