ecoute
Feature request: Let user choose local whisper model to use
There are several Whisper models available for local use; perhaps the local environment has a beefier GPU that can run a bigger model. A switch pointing to a model would probably do. Thanks for the awesome work!
This request is implemented in the fork https://github.com/vivekuppal/transcribe.
It allows the use of the tiny, base, and small models, as long as the models have been downloaded to the appropriate location.
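
For reference, here is a minimal sketch of what such a switch could look like with the openai-whisper package; the `--model` flag name and its choices are illustrative assumptions, not necessarily the fork's actual interface:

```python
# Minimal sketch of a model-selection switch, assuming the openai-whisper
# package. The --model flag and its choices are illustrative only.
import argparse

import whisper


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Transcribe audio with a chosen local Whisper model"
    )
    parser.add_argument(
        "--model",
        default="tiny",
        choices=["tiny", "base", "small"],
        help="Name of the locally available Whisper model to load",
    )
    parser.add_argument("audio", help="Path to the audio file to transcribe")
    args = parser.parse_args()

    # load_model() resolves the model name against the local cache
    # (~/.cache/whisper by default) and downloads the weights if missing.
    model = whisper.load_model(args.model)
    result = model.transcribe(args.audio)
    print(result["text"])


if __name__ == "__main__":
    main()
```

Larger models (e.g. medium or large) could be added to the choices the same way, at the cost of more VRAM and slower transcription.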