whispercpp
feat: cuBLAS Support
Feature request
It would be nice to be able to compile with cuBLAS support when installing or building locally. I haven't found a way to do so, but I'm also unfamiliar with Bazel, so apologies if this is already possible. If it is, how would I go about doing this?
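For context, upstream whisper.cpp documents a build-time flag for enabling cuBLAS; something equivalent would need to be plumbed through this project's Bazel build. A rough sketch of the upstream build (assuming the CUDA toolkit is already installed, and noting that the exact flag name has varied between whisper.cpp releases):

```shell
# Sketch only: build upstream whisper.cpp with cuBLAS enabled.
# Requires the NVIDIA CUDA toolkit (nvcc) on PATH.
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
WHISPER_CUBLAS=1 make   # older releases; newer ones use a CMake option instead
```

The open question for this issue is how to pass an analogous define/flag through Bazel when the bindings build whisper.cpp from source.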
Motivation
This would offload much of the processing from the CPU to the GPU, speeding up transcription considerably for anyone with a capable GPU.
Other
No response
I'd also be interested in this. You don't even need a very powerful GPU: my 1070 Ti runs inference at roughly 2x realtime on the large model with beam size 1.