
🎞️ Subtitles generation tool (Web-UI + CLI + Python package) powered by OpenAI's Whisper and its variants 🎞️

Results: 48 subsai issues, sorted by recently updated

Hi, I always get this error: "Subs AI: Subtitles generation tool powered by OpenAI's Whisper and its variants. Version: 1.2.3 [-] Model name: openai/whisper [-] Model configs: defaults [+] Initializing...

How do I modify the config scheme if I want to use ```create_model('openai/whisper', {'model_type': 'base'}, model_config={"vad": "auditok"})```? Can you give me an example?
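A hedged sketch of what the combined configuration might look like, assuming (as the snippet suggests) that `create_model` takes a model name plus a configuration mapping; the exact signature and whether `openai/whisper` accepts a `vad` key should be checked against the subsai docs:

```python
# Hypothetical combined configuration: both the Whisper checkpoint size
# and the VAD backend as plain keys in one mapping (key names taken from
# the question above; not verified against the subsai API).
model_config = {
    'model_type': 'base',   # Whisper checkpoint size
    'vad': 'auditok',       # voice-activity detection backend
}
# model = subs_ai.create_model('openai/whisper', model_config=model_config)  # hypothetical call
print(model_config)
```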

I was using faster-whisper and the downloaded model uses relative symlinks (to avoid duplication, I suppose), but the web UI (or ctranslate2) doesn't like it: ``` Traceback (most recent call...
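As a hypothetical workaround for a loader that cannot follow the cache's relative symlinks, the links could be dereferenced into real files first. This is a sketch; `dereference_symlinks` is not part of subsai, and copying defeats the deduplication the symlinks provide:

```python
import tempfile
from pathlib import Path

def dereference_symlinks(model_dir: Path) -> None:
    """Replace each symlink under model_dir with a copy of its target (sketch)."""
    for p in model_dir.rglob('*'):
        if p.is_symlink():
            target = p.resolve()   # follows the relative link
            p.unlink()
            p.write_bytes(target.read_bytes())

# Demo on a throwaway directory mimicking the cache layout
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / 'blob').write_bytes(b'weights')
    (root / 'model.bin').symlink_to('blob')   # relative symlink, as in the report
    dereference_symlinks(root)
    assert not (root / 'model.bin').is_symlink()
    assert (root / 'model.bin').read_bytes() == b'weights'
```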

Running on Windows 11, CUDA (NVIDIA RTX 3090), Python 3.10.6. The server is run with `subsai-webui --server.maxUploadSize 50000`, in case that's relevant. Using `faster-whisper` and `large-v2`. Trying to process a 1hr...

OK. I have Windows 10 and PyCharm with Python 3.9.13. I used the Python code from the main page and, of course, installed the subsai package via git, and I got...

When I set the model to whisper-timestamped and pressed Transcribe, this error showed up: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly...
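The truncated message is onnxruntime's requirement, since ORT 1.9, to pass an explicit `providers` list when creating an `InferenceSession`. A minimal sketch of picking providers from the ones the error reports as enabled (the `InferenceSession` call itself is left commented out as an assumption about the caller's model path):

```python
# Preferred execution order, filtered against what this ORT build enables
# (the 'available' list is copied from the error message above).
preferred = ['CUDAExecutionProvider', 'CPUExecutionProvider']
available = ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']

providers = [p for p in preferred if p in available]
# sess = onnxruntime.InferenceSession('model.onnx', providers=providers)  # hypothetical path
print(providers)
```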

requirements.txt points at the wrong whisper-timestamped version. After manually removing the `@`-part after `git+https://github.com/linto-ai/whisper-timestamped` in line 7 and re-running `docker compose build --no-cache .`, it works for me.
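The manual edit described above can be sketched with `sed`; the commit hash below is a hypothetical placeholder, since the real pin sits in line 7 of the repo's requirements.txt:

```shell
# Demonstrate stripping the '@<commit>' pin on a sample file, then the
# rebuild step from the report above would follow (docker not run here).
tmp=$(mktemp -d)
printf 'git+https://github.com/linto-ai/whisper-timestamped@deadbeef\n' > "$tmp/requirements.txt"
sed 's|@.*$||' "$tmp/requirements.txt" > "$tmp/requirements.fixed.txt"
cat "$tmp/requirements.fixed.txt"   # -> git+https://github.com/linto-ai/whisper-timestamped
rm -r "$tmp"
```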

"File must be 200.0MB or smaller." when inputting a file over 200 MB. I was hoping it could be transcribed/merged without showing the video if >200 MB, as I would assume that's...
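The 200 MB cap is Streamlit's default upload limit rather than a hard subsai limit; as one of the reports above already shows, the web UI can be launched with a larger cap via Streamlit's `--server.maxUploadSize` flag (value in MB). A configuration-only sketch:

```shell
# Raise the upload cap to ~5 GB (the value here is just an example)
subsai-webui --server.maxUploadSize 5000
```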

I have the RTX 3060, but I always get `ValueError: unsupported device cuda:0` when trying to use WhisperX.
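WhisperX reportedly runs on ctranslate2 (via faster-whisper), which takes `device='cuda'` plus a separate `device_index` rather than a combined `'cuda:0'` string, so the usual fix is simply passing `'cuda'`. A hypothetical helper to split a PyTorch-style device spec (`split_device` is not part of any of these libraries):

```python
def split_device(spec: str) -> tuple[str, int]:
    """Split a 'cuda:0'-style spec into (device, index); index defaults to 0."""
    if ':' in spec:
        dev, idx = spec.split(':', 1)
        return dev, int(idx)
    return spec, 0

print(split_device('cuda:0'))  # -> ('cuda', 0)
```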