
Does Pywhispercpp support batching and what gives if not?

Open BBC-Esq opened this issue 1 year ago • 11 comments

See here... start thinking about true batching... 😉

https://github.com/shashikg/WhisperS2T/issues/33

BBC-Esq avatar Sep 06 '24 21:09 BBC-Esq

@BBC-Esq, are you talking about batch decoding? If whisper.cpp supports it, then I believe it will be supported here as well.

absadiki avatar Sep 06 '24 22:09 absadiki

@BBC-Esq, are you talking about batch decoding? If whisper.cpp supports it, then I believe it will be supported here as well.

I think he means batch prepping?

Edit:

Nope, batch transcribing!

UsernamesLame avatar Sep 07 '24 15:09 UsernamesLame

cough

import os
from pywhispercpp.model import Model
import multiprocessing
from glob import glob

# Every regular file in the current directory, skipping Python scripts
files = [f for f in glob("*") if os.path.isfile(f) and not f.endswith(".py")]

def transcribeFile(file, queue):
    # Each worker loads its own model, so no state is shared between processes
    model = Model("base")
    segments = model.transcribe(file)
    queue.put([file, segments])

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    processes = []
    for file in files:
        process = multiprocessing.Process(target=transcribeFile, args=(file, queue))
        processes.append(process)

    for process in processes:
        process.start()

    # Print results as they arrive; iter() keeps calling queue.get() until it
    # returns a None sentinel -- which nothing here ever sends (see the fix below)
    for transcriptions in iter(queue.get, None):
        print(transcriptions)

@BBC-Esq @abdeladim-s here's some simple code to batch process with multiple independent whisper instances, ensuring no context is ever carried over between them.

~~It doesn't do them in parallel as of now due to calling join, but that's fine. It's still batching it. I'll clean this up later.~~

Cleaned it up. Fixed it running in parallel. And oh boy is it a CPU killer.

UsernamesLame avatar Sep 07 '24 22:09 UsernamesLame


So a quick heads up: it is painfully slow to do this in parallel. Dog slow, and the more files you throw at it, the slower it gets, since spawning one process per file with no cap oversubscribes the CPU. But this is just POC code. There's room for improvement, such as batching based on file length, file size, core count, etc. (see the sketch just below).
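One possible shape of that improvement (a sketch, not from the thread; the function name and the size-ordering heuristic are mine): group the files into core-count-sized batches and run one batch of processes at a time through the loop above.

import os

def batches(files, batch_size=os.cpu_count()):
    # Sort by size so similarly sized files land in the same batch; each batch
    # then finishes at roughly the same time instead of idling on stragglers
    ordered = sorted(files, key=os.path.getsize, reverse=True)
    for i in range(0, len(ordered), batch_size):
        yield ordered[i:i + batch_size]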

I'll see if I can beat a few optimizations out of this.

Edit:

I completely forgot that with multiprocessing, queues must be emptied before the main process can finish as they hold open pipes.


    # Drain the queue so the worker pipes can flush and everything shuts down cleanly
    while not queue.empty():
        print(queue.get())

Quick fix over iter. The iter(queue.get, None) version blocks forever waiting for a None sentinel that nothing ever puts on the queue; draining with get() until the queue reports empty lets the process exit.
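For completeness, the iter() pattern can also be kept if something actually sends the sentinel. A sketch (the watcher-thread approach is my assumption, not from the thread) that would replace the result loop in the script above: a background thread joins the workers and then puts None, while the main thread keeps draining so the children can flush their results and exit.

    import threading

    def put_sentinel():
        # Once every worker has exited, terminate the iteration below
        for process in processes:
            process.join()
        queue.put(None)

    threading.Thread(target=put_sentinel, daemon=True).start()

    # The main thread keeps consuming, so workers can flush their queue data and exit
    for transcriptions in iter(queue.get, None):
        print(transcriptions)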

UsernamesLame avatar Sep 07 '24 22:09 UsernamesLame


import os
from pywhispercpp.model import Model
import multiprocessing
from glob import glob

files = [f for f in glob("*") if os.path.isfile(f) and not f.endswith(".py")]

def transcribeFile(file, queue):
    # One model per process: no shared state between workers
    model = Model("base")
    segments = model.transcribe(file)
    queue.put([file, segments])

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    processes = []

    for file in files:
        process = multiprocessing.Process(target=transcribeFile, args=(file, queue))
        processes.append(process)
        process.start()

    # Drain one result per worker *before* joining: a child can't exit until
    # its queued data is flushed through the pipe, so joining first can deadlock
    for _ in processes:
        print(queue.get())

    for process in processes:
        process.join()

This is where I am at. It can queue up lots of files to transcribe in parallel, but there's no limit on how many run at once, which needs improvement. I also need to make it accept new files being added to its queue while it's running.
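A minimal sketch of one way to cap that (multiprocessing.Pool is a standard-library approach, not something from the thread; the pool size is an assumption): the pool limits the number of live workers to the core count and hands results back as they finish.

import os
import multiprocessing
from glob import glob

from pywhispercpp.model import Model

def transcribeFile(file):
    # Fresh model per task, same as the Process version above
    model = Model("base")
    return file, model.transcribe(file)

if __name__ == "__main__":
    files = [f for f in glob("*") if os.path.isfile(f) and not f.endswith(".py")]

    # The pool caps the number of live workers at os.cpu_count()
    # instead of spawning one process per file
    with multiprocessing.Pool(processes=os.cpu_count()) as pool:
        for file, segments in pool.imap_unordered(transcribeFile, files):
            print(file, segments)

This assumes the returned segment objects pickle cleanly, which the Queue versions above rely on as well.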

UsernamesLame avatar Sep 07 '24 23:09 UsernamesLame

If you're after serial batch transcriptions:


from pywhispercpp.model import Model
import os
from glob import glob


if __name__ == "__main__":
    files = [file for file in glob("*") if os.path.isfile(file) and not file.endswith((".py", ".cfg", ".txt"))]

    for file in files:
        # Fresh model per file, so no context carries over between transcriptions
        model = Model("base")
        segments = model.transcribe(file)

        with open(f"{file}-transcription.txt", "w") as f:
            for segment in segments:
                # Newline between segments so the output isn't one run-on line
                f.write(segment.text + "\n")

UsernamesLame avatar Sep 08 '24 00:09 UsernamesLame

@UsernamesLame, that's multi-processing. The scripts work great :+1:

absadiki avatar Sep 08 '24 05:09 absadiki

Unfortunately, as @abdeladim-s knows, I can't get pywhispercpp to even install correctly...

BBC-Esq avatar Sep 08 '24 12:09 BBC-Esq

@UsernamesLame, that's multi-processing.

The scripts work great :+1:

It's "batch" processing 😅

UsernamesLame avatar Sep 08 '24 13:09 UsernamesLame

Unfortunately, as @abdeladim-s knows, I can't get pywhispercpp to even install correctly...

Dump logs. Let's get this working.

UsernamesLame avatar Sep 08 '24 13:09 UsernamesLame

Logs dumped and now I'm flushing the toilet. 😉 jk. Won't have time today as I'm working on the benchmarking repo for a bit... need to get an appropriate dataset and then learn/use the jiwer library? lol
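(In case it helps with the benchmarking: jiwer's core use boils down to one call. A minimal sketch with made-up strings, not from the thread:)

import jiwer

# Word error rate between a reference transcript and a model's output
error = jiwer.wer("the quick brown fox", "the quick brown dog")
print(error)  # 0.25 -- one word in four is wrong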

BBC-Esq avatar Sep 08 '24 14:09 BBC-Esq