Max Bain
> Will whisperX take any advantage of this? I found Whisper JAX to use crazy amounts of GPU memory (48GB?), and it also led to worse transcription quality. Anyway, I...
> I hope that I won't exceed 10GB since I'm limited by my 3080

I haven't benchmarked it, but you should be able to get memory requirements down...
@guillaumekln thanks for faster-whisper! I was previously using a custom implementation, but yours really speeds up beam_size>1 and reduces GPU memory requirements 👌🏽 Yes, there are a few limitations /...
> Hi, thanks. Any pointers to a minimal amount of code required to wrap faster-whisper to add support for this?

@ozancaglayan The main branch does exactly this.

> Not sure...
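For a rough idea of what such a wrapper looks like, here is a minimal sketch assuming faster-whisper's public API (`WhisperModel` and `transcribe` returning `(segments, info)`). The function name `transcribe_file` and the `model=` injection parameter are illustrative, not part of whisperX:

```python
def transcribe_file(audio_path, model=None, beam_size=5):
    """Transcribe one audio file with faster-whisper and return
    ([(start, end, text), ...], detected_language).

    `model` can be passed in to reuse a loaded model across files;
    otherwise one is created (assumed defaults shown below).
    """
    if model is None:
        # Lazy import so the wrapper can be imported without the package.
        from faster_whisper import WhisperModel
        model = WhisperModel("large-v2", device="cuda", compute_type="float16")

    # transcribe() returns a lazy generator of segments plus an info object.
    segments, info = model.transcribe(audio_path, beam_size=beam_size)

    # Materialise the generator so timings can be reused (e.g. for alignment).
    return [(s.start, s.end, s.text) for s in segments], info.language
```

Because the model is injectable, the wrapper can be unit-tested with a stub and swapped for other backends.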
Ah, thanks for reporting this; I hadn't considered this in the language detection logic. Isn't the language detected per audio file, not per chunk? Are you referring to transcribing many...
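One way to keep detection per-file rather than per-chunk is to detect the language on the first chunk and then pin it for the rest. This is a hedged sketch, not whisperX's actual logic; the chunk interface and `transcribe_chunks` helper are illustrative, assuming only that the backend's `transcribe` accepts a `language=` argument (as faster-whisper's does):

```python
def transcribe_chunks(chunks, model, language=None, beam_size=5):
    """Transcribe a sequence of audio chunks from one file, locking the
    language after the first chunk so detection runs at most once."""
    results = []
    for chunk in chunks:
        segments, info = model.transcribe(
            chunk, beam_size=beam_size, language=language
        )
        if language is None:
            # Pin the first chunk's detected language for all later chunks.
            language = info.language
        results.extend((s.start, s.end, s.text) for s in segments)
    return results, language
```

Passing an explicit `language=` up front skips detection entirely, which also avoids per-chunk detection flipping languages mid-file.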