
Whisper in web-llm with WebGPU?

Open sandorkonya opened this issue 2 years ago • 4 comments

Great Repository!

Is it within your scope to implement a WebGPU-accelerated version of Whisper?

Not sure if this helps, but there is a C port of Whisper with a CPU implementation, and as mentioned in this discussion, the main thing that needs to be offloaded to the GPU is the GGML_OP_MUL_MAT operator.
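For context on what that offload would involve, here is a minimal, illustrative sketch of a naive matrix multiply dispatched through WebGPU in TypeScript. It is not taken from whisper.cpp or web-llm; the buffer layout, workgroup size, and helper names (`matmulGPU`, `uploadBuffer`) are assumptions for illustration, and a real port would tile and batch this kernel for performance.

```typescript
/// <reference types="@webgpu/types" />
// Hypothetical sketch: a plain row-major matrix multiply (the role GGML_OP_MUL_MAT
// plays on the CPU) run as a WebGPU compute shader. Not whisper.cpp/web-llm code.

const WGSL_MATMUL = /* wgsl */ `
struct Dims { M : u32, N : u32, K : u32 }

@group(0) @binding(0) var<storage, read>       a    : array<f32>; // M x K
@group(0) @binding(1) var<storage, read>       b    : array<f32>; // K x N
@group(0) @binding(2) var<storage, read_write> outm : array<f32>; // M x N
@group(0) @binding(3) var<uniform>             dims : Dims;

@compute @workgroup_size(8, 8)
fn main(@builtin(global_invocation_id) gid : vec3<u32>) {
  let row = gid.x;
  let col = gid.y;
  if (row >= dims.M || col >= dims.N) { return; }
  var acc = 0.0;
  for (var k = 0u; k < dims.K; k = k + 1u) {
    acc = acc + a[row * dims.K + k] * b[k * dims.N + col];
  }
  outm[row * dims.N + col] = acc;
}`;

// Helper (assumed name): create a GPU buffer and copy host data into it.
function uploadBuffer(device: GPUDevice, data: Float32Array | Uint32Array, usage: number): GPUBuffer {
  const buf = device.createBuffer({ size: data.byteLength, usage, mappedAtCreation: true });
  if (data instanceof Float32Array) new Float32Array(buf.getMappedRange()).set(data);
  else new Uint32Array(buf.getMappedRange()).set(data);
  buf.unmap();
  return buf;
}

// Multiply an (M x K) matrix by a (K x N) matrix on the GPU and read back the result.
async function matmulGPU(a: Float32Array, b: Float32Array, M: number, K: number, N: number): Promise<Float32Array> {
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) throw new Error("WebGPU not available");
  const device = await adapter.requestDevice();

  const aBuf = uploadBuffer(device, a, GPUBufferUsage.STORAGE);
  const bBuf = uploadBuffer(device, b, GPUBufferUsage.STORAGE);
  // Pad the uniform to 16 bytes; the shader only reads the first three u32s.
  const dimsBuf = uploadBuffer(device, new Uint32Array([M, N, K, 0]), GPUBufferUsage.UNIFORM);
  const outBuf = device.createBuffer({ size: M * N * 4, usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC });
  const readBuf = device.createBuffer({ size: M * N * 4, usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST });

  const pipeline = device.createComputePipeline({
    layout: "auto",
    compute: { module: device.createShaderModule({ code: WGSL_MATMUL }), entryPoint: "main" },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: { buffer: aBuf } },
      { binding: 1, resource: { buffer: bBuf } },
      { binding: 2, resource: { buffer: outBuf } },
      { binding: 3, resource: { buffer: dimsBuf } },
    ],
  });

  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  // One 8x8 workgroup covers an 8x8 tile of the output matrix.
  pass.dispatchWorkgroups(Math.ceil(M / 8), Math.ceil(N / 8));
  pass.end();
  encoder.copyBufferToBuffer(outBuf, 0, readBuf, 0, M * N * 4);
  device.queue.submit([encoder.finish()]);

  await readBuf.mapAsync(GPUMapMode.READ);
  const result = new Float32Array(readBuf.getMappedRange().slice(0));
  readBuf.unmap();
  return result;
}
```

In a real encoder/decoder port you would keep the weight buffers resident on the GPU and only dispatch kernels per layer, rather than re-uploading and reading back per multiply as this sketch does.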

Thanks!

sandorkonya avatar Apr 25 '23 09:04 sandorkonya

Great suggestion, yes, this is something that we can push for.

tqchen avatar Apr 25 '23 14:04 tqchen

@tqchen my ultimate goal would be to get it to run as efficiently as possible on an Android edge device.

There is already a solution in the ONNX framework based on a recent merge, but I am not sure when it will be usable on Android.

Some have tried GPU delegates, but without success so far.

Any idea how one could solve it on the edge (Android) device?

sandorkonya avatar Apr 25 '23 18:04 sandorkonya

There is also a demo of Whisper running via WebAssembly in that repo. https://github.com/ggerganov/whisper.cpp/tree/master/examples/talk.wasm

DustinBrett avatar Apr 26 '23 04:04 DustinBrett

> There is also a demo of Whisper running via WebAssembly in that repo. https://github.com/ggerganov/whisper.cpp/tree/master/examples/talk.wasm

Yes, but it runs on the CPU. I hope that with a GPU version one could reach real-time inference.

sandorkonya avatar Apr 26 '23 07:04 sandorkonya