whisper.cpp
Batch inference
I use whisper.cpp in the project loud.cpp along with pyannote for diarization. However, I've run into the following issue: most of the sentences are shorter than 30 s, but whisper pads and processes each one as if it were a full 30 s, which makes transcription much slower. How can I batch-infer segments in a way that lets me know which text belongs to which batched segment? Thanks
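One workaround (a sketch, not part of whisper.cpp's API): pack several short diarized segments into a single window of up to 30 s, remember each segment's offset inside the packed window, then assign whisper's timestamped output segments back to their source by overlap. The `Segment` class, `pack_batches`, and `assign_text` below are hypothetical helper names; the actual transcription call (e.g. `whisper_full` on the concatenated audio) is left out, since only the packing and mapping logic is illustrated here.

```python
# Sketch: pack short diarized segments into <= 30 s windows and map
# transcript text back to its source segment by timestamp. The actual
# whisper.cpp call on the concatenated audio is not shown; this only
# demonstrates the bookkeeping needed to keep the text-to-segment mapping.

from dataclasses import dataclass, field

SAMPLE_RATE = 16000   # whisper expects 16 kHz mono audio
WINDOW_S = 30.0       # whisper's fixed window length

@dataclass
class Segment:
    speaker: str
    audio: list                      # raw samples for this diarized segment
    offset_in_batch: float = 0.0     # start time inside the packed window,
                                     # filled in by pack_batches

def pack_batches(segments):
    """Greedily pack segments into batches no longer than WINDOW_S."""
    batches, current, used = [], [], 0.0
    for seg in segments:
        dur = len(seg.audio) / SAMPLE_RATE
        if used + dur > WINDOW_S and current:
            batches.append(current)
            current, used = [], 0.0
        seg.offset_in_batch = used
        current.append(seg)
        used += dur
    if current:
        batches.append(current)
    return batches

def assign_text(batch, whisper_segments):
    """Map whisper output (t0, t1, text) tuples back to source segments,
    using the midpoint of each transcript segment's time range."""
    out = {id(seg): [] for seg in batch}
    for t0, t1, text in whisper_segments:
        mid = (t0 + t1) / 2
        for seg in batch:
            dur = len(seg.audio) / SAMPLE_RATE
            if seg.offset_in_batch <= mid < seg.offset_in_batch + dur:
                out[id(seg)].append(text)
                break
    return out
```

One caveat with this approach: concatenating audio from different speakers can confuse the model at segment boundaries, so inserting a short stretch of silence between packed segments (and widening the matching window accordingly) may improve accuracy.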
@thewh1teagle can your diarization workflow work with the realtime streaming example from whisper.cpp?
How? How do I do it? Help me, please 🥺🥺🥺