CoreML Support for Apple Silicon
I have an M3 Max, and whisper-cpp-python doesn't seem to use the CoreML feature. If I use whisper-cpp-python with the medium model to transcribe an audio file that's 3 minutes 30 seconds long, it takes 76 seconds. If I use whisper.cpp compiled with CoreML support and transcribe the same audio with the medium model, it takes 22 seconds. If I use faster-whisper to transcribe the same audio with the medium model, it takes 69 seconds. How can I enable whisper-cpp-python to use CoreML? Thanks so much!
+1
I guess the whisper.cpp used inside the project needs to be compiled with the proper flag, or maybe you can set an environment variable for it. See the whisper.cpp README.md for details and search for the word CoreML. A rough sketch of the steps is below.
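Per the whisper.cpp README, CoreML needs two things: a CoreML encoder model generated next to the ggml model, and whisper.cpp built with the CoreML flag. The first two parts below are the documented whisper.cpp steps; the last line is only a guess about whisper-cpp-python, modeled on how similar bindings (e.g. llama-cpp-python) forward CMake flags, and I haven't verified that this package honors `CMAKE_ARGS` at all.

```sh
# Inside a whisper.cpp checkout: generate the CoreML encoder for the medium model
# (prerequisites from the README: pip install ane_transformers openai-whisper coremltools)
./models/generate-coreml-model.sh medium
# This produces models/ggml-medium-encoder.mlmodelc next to models/ggml-medium.bin;
# a CoreML-enabled whisper.cpp picks it up automatically when loading the ggml model.

# Build whisper.cpp itself with CoreML enabled:
cmake -B build -DWHISPER_COREML=1
cmake --build build -j --config Release

# Unverified assumption for whisper-cpp-python: IF it forwards CMake flags to the
# whisper.cpp it bundles, a reinstall along these lines might enable CoreML:
CMAKE_ARGS="-DWHISPER_COREML=1" pip install --force-reinstall --no-cache-dir whisper-cpp-python
```

If the reinstall line doesn't take effect, it's worth checking how the package builds its bundled whisper.cpp (setup.py or vendored sources) and whether it exposes any build option for CoreML.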