whisper.cpp
Core ML model does not seem to be working in iOS app
I was able to integrate the Core ML model in the app, and the log says "Core ML Model Loaded" (please see the attached screenshot for reference), but when I run the transcription using the whisper_full function, I am not seeing the same speedup that I see in my terminal. I transcribed a 40-second audio clip. Below are the runs I did for the transcriptions.
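For context, this is roughly what such a timed transcription looks like with the whisper.cpp C API. This is only a minimal sketch: the model path and the `pcm` buffer are placeholders, and whisper_print_timings is what reports the per-run encode time discussed below.

```c
#include "whisper.h"

// Minimal sketch: assumes `pcm` holds 16 kHz mono float samples and
// "ggml-base.en.bin" is the bundled ggml model (both are placeholders).
static void transcribe(const float * pcm, int n_samples) {
    struct whisper_context * ctx = whisper_init_from_file("ggml-base.en.bin");
    if (ctx == NULL) {
        return;
    }

    struct whisper_full_params params =
        whisper_full_default_params(WHISPER_SAMPLING_GREEDY);

    // whisper_full runs the full encode + decode pipeline on the samples
    if (whisper_full(ctx, params, pcm, n_samples) == 0) {
        // prints load / encode / decode times, including "ms per run" for the encoder
        whisper_print_timings(ctx);
    }

    whisper_free(ctx);
}
```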
This screenshot is from when the Core ML model got loaded.
Any help would be appreciated.
How much is the encode time (ms per run) without WHISPER_COREML?
Check only the first run, as in this screenshot:
OK, I just checked and the difference in speed is there. It takes 160 seconds to encode this audio without Core ML in the app, but when I ran the same audio in the terminal, the difference was far greater. So I want to know why my app is not performing the same way. Here are the results.
First run:
Second run:
How did you enable coreml? Is it a build flag?
Yes, I had to add it in the .h file to get it working in the iOS app.
Yep, figured it out. Have to include the files under /coreml in the app as well!
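For anyone following along, here is roughly what that amounts to. This is a sketch based on the whisper.objc example setup; the macro name, file list, and model file name are what current whisper.cpp uses, but check them against your version.

```c
// Enable the Core ML encoder path at compile time, e.g. by defining the
// macro in a header or as a preprocessor flag on the iOS target:
#define WHISPER_USE_COREML

// The Core ML glue sources from whisper.cpp's coreml/ directory also have
// to be compiled into the app target:
//   coreml/whisper-encoder.h
//   coreml/whisper-encoder.mm
//   coreml/whisper-encoder-impl.h
//   coreml/whisper-encoder-impl.m
//
// Finally, the compiled Core ML model (e.g. ggml-base.en-encoder.mlmodelc)
// should be bundled next to the ggml model so whisper_init_from_file can
// find and load it.
#include "whisper.h"
```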
@ahsanzia341 Hi, could you show some code for how you used Core ML on iOS? Thanks very much!