ecoute
Very slow load times, unlike the demo video
Hi, while using the app the input and output load very slowly; it's nothing like the demo video.
I have tried reducing the update time to 1 second, but it is still not even close to being as fast as the demo video.
Note - I am on free trial credits of the OpenAI API.
Am I missing something?
Is the transcription or the response slow? The transcription is done on your GPU; if your GPU is incompatible, it falls back to your CPU, which could be slow. Could you provide the console output?
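One quick way to see which device Whisper will end up using is to ask PyTorch directly. This is just a sketch: it assumes PyTorch is installed (Whisper depends on it), and the `whisper_device` helper name is made up for illustration.

```python
# Sketch: report whether Whisper will run on the GPU or fall back to the CPU.
# Assumes PyTorch is installed; it is imported lazily inside the function so
# the snippet itself loads without it.
def whisper_device() -> str:
    import torch  # Whisper's backend
    # Whisper uses CUDA when available; otherwise it runs on the CPU.
    return "cuda" if torch.cuda.is_available() else "cpu"
```

Running `print(whisper_device())` on a machine with only integrated graphics should print `cpu`, which would explain the slow transcription.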
Thanks, I just read about how Whisper works. I thought it worked like gpt-turbo via the API, but now I understand that it runs locally to do the transcription.
My notebook has an Intel Iris XE which is an integrated graphics processor. Is that not compatible?
How can I make this work in my situation? Can you help? (Please consider that I am not a tech guy, so your guidance, or pointing me in the right direction to make the transcription work in real time, would be really helpful, as I won't be able to figure it out by myself.)
The Intel Iris XE will not run Whisper on the GPU. Unfortunately, there is no simple solution to your issue. You could consider modifying the get_transcription method to call an API (somehow) instead of the loaded Whisper model.
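The replacement could look something like the sketch below, which sends the recorded audio to OpenAI's hosted Whisper API (`whisper-1`) instead of running the model locally. This is only an illustration, not the project's actual code: the `transcribe_via_api` helper and the `audio_path` parameter are made-up names, and it assumes the `openai` package (v0.x API) is installed with `OPENAI_API_KEY` set in the environment.

```python
# Hypothetical sketch: transcribe a recorded audio chunk with OpenAI's hosted
# Whisper API instead of the local Whisper model. Assumes the openai package
# (v0.x) is installed and OPENAI_API_KEY is set; openai is imported lazily
# inside the function so the snippet itself loads without it.
def transcribe_via_api(audio_path: str) -> str:
    import openai  # third-party; assumed installed
    with open(audio_path, "rb") as audio_file:
        # The hosted model is named "whisper-1"; the response carries
        # the transcription under the "text" key.
        result = openai.Audio.transcribe("whisper-1", audio_file)
    return result["text"]
```

A function like this could be called from inside get_transcription in place of the local model's transcribe call. Note that the API is billed per minute of audio, so free-trial credits would be consumed by both transcription and responses.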