whisper.cpp
Port of OpenAI's Whisper model in C/C++
```
whisper.cpp git:(master) make stream
c++ -I. -I./examples -O3 -std=c++11 -pthread examples/stream/stream.cpp ggml.o whisper.o -o stream `sdl2-config --cflags --libs` -framework Accelerate
In file included from examples/stream/stream.cpp:12:
In file included from...
```
Are there any plans / would it be possible to use pybind11 to make a python library to enable easy use of live streaming audio in python? I was considering...
[WIP] With the idea in #137 it is possible to reduce the encoder time several-fold. This is beneficial for the `stream` example, because it already...
Hi ggerganov, I really appreciate your effort. I'd like to make a .NET application using a dynamic DLL. I realized that whisper is compiled as a static library when I built using...
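For anyone with the same need: the build is CMake-based, and the standard CMake `BUILD_SHARED_LIBS` switch is the usual way to produce a dynamic library instead of a static one. A minimal sketch, assuming the repo's `CMakeLists.txt` honors this standard option (worth verifying against the version you have checked out):

```shell
# Configure and build whisper as a shared library (.dll / .so / .dylib)
# instead of the default static archive.
# BUILD_SHARED_LIBS is a standard CMake option, not a whisper-specific flag.
cmake -S . -B build -DBUILD_SHARED_LIBS=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release
```

On Windows the resulting `whisper.dll` could then be loaded from .NET via P/Invoke against the C functions declared in `whisper.h`.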
One of my teammates compiled an exe and it was working quite well on most of our other Windows systems, so I thought I'd share it for those having difficulties...
ref #154 [WIP]
I just had an awesome idea: Make a web-page that: - Listens when someone speaks - Transcribes the words using [WASM Whisper](https://github.com/ggerganov/whisper.cpp/tree/master/examples/whisper.wasm) - Generates a new sentence using [WASM GPT-2](https://github.com/ggerganov/ggml/tree/master/examples/gpt-2)...
Is it possible to have the convert script support the Hugging Face format, like the one here: https://huggingface.co/openai/whisper-medium/tree/main ? The use case is running fine-tuned models with the C++ code.
I am trying to make this work for a personal project without translating to English every time. I just want it to generate the subtitles in the detected language.
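For this use case, the `main` example's existing flags should already cover it: transcription (not translation) is the default task, and the language can be auto-detected. A hedged invocation, with flag names as found in recent whisper.cpp versions (confirm with `./main --help`):

```shell
# Transcribe in the original, auto-detected language and emit subtitles.
# -l auto : auto-detect the spoken language instead of forcing one
# -osrt   : write an .srt subtitle file next to the input
# Omitting --translate keeps the default transcribe task, so no English
# translation happens.
./main -m models/ggml-base.bin -f audio.wav -l auto -osrt
```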
When I give an audio file with mixed-language content (e.g. English and Japanese) as an input, I can't seem to get the transcript in both languages as they were spoken....