Segfaults when using CoreML
The version I installed with `WHISPER_COREML=1 pip install git+https://github.com/absadiki/pywhispercpp` seems to segfault:
```
whisper_backend_init: using BLAS backend
whisper_init_state: kv self size = 6.29 MB
whisper_init_state: kv cross size = 18.87 MB
whisper_init_state: kv pad size = 3.15 MB
whisper_init_state: loading Core ML model from '/Users/nilleb/Library/Application Support/pywhispercpp/models/ggml-base.en-encoder.mlmodelc'
whisper_init_state: first run on a device may take a while ...
whisper_init_state: failed to load Core ML model from '/Users/nilleb/Library/Application Support/pywhispercpp/models/ggml-base.en-encoder.mlmodelc'
ggml_metal_free: deallocating
zsh: segmentation fault  python sample.py
```
The sample.py is almost identical to the one in the README:

```python
from pywhispercpp.model import Model

model = Model('base.en')
segments = model.transcribe('sample16k.wav')
for segment in segments:
    print(segment.text)
```
I also tried replacing the pinned 1.7.4 version with ggml-org/whisper.cpp @ HEAD, and the result is much the same.

The whisper.cpp build I made independently of this project works fine:

```shell
./build/bin/whisper-server --model models/ggml-$model.en.bin
```

(where `model` can be at least `base` or `medium`)
> whisper_init_state: failed to load Core ML model from '/Users/nilleb/Library/Application Support/pywhispercpp/models/ggml-base.en-encoder.mlmodelc'
@nilleb, I believe you need to convert the model to `mlmodelc` format and place it in the same directory as the ggml model. You can follow the instructions in the whisper.cpp repo.
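To confirm the files are laid out the way whisper.cpp expects, a quick check like this may help (a minimal sketch: the naming convention follows the paths in the log above, and `coreml_model_present` is an illustrative helper, not part of pywhispercpp):

```python
from pathlib import Path

def coreml_model_present(models_dir: Path, name: str) -> bool:
    """Return True if the CoreML encoder bundle sits next to the ggml model.

    whisper.cpp looks for 'ggml-<name>-encoder.mlmodelc' (a directory, not a
    single file) in the same folder as 'ggml-<name>.bin'.
    """
    ggml = models_dir / f"ggml-{name}.bin"
    coreml = models_dir / f"ggml-{name}-encoder.mlmodelc"
    return ggml.exists() and coreml.is_dir()

# Example: check the directory pywhispercpp used in the log above
# coreml_model_present(
#     Path.home() / "Library/Application Support/pywhispercpp/models", "base.en")
```

If this returns False, the CoreML bundle is missing or misplaced, which matches the "failed to load Core ML model" line in the log.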
I did this (before posting the issue)! In fact, I am able to run the ggml-org version with CoreML support, but I get segfaults when running this Python wrapper. I should definitely dig deeper into the Python/C++ interop. I will!