insanely-fast-whisper

Results: 82 insanely-fast-whisper issues, sorted by recently updated

When I run the following code, an error occurs:

```
import torch
from transformers import pipeline
from transformers.utils import is_flash_attn_2_available

pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-tiny",  # select checkpoint from https://huggingface.co/openai/whisper-large-v3#model-details
    torch_dtype=torch.float16,
    device="cuda:0",
    ...
```
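For reference, a minimal runnable version of that setup (a sketch assuming a CUDA GPU; the audio path `audio.mp3` and the batching parameters are placeholders, not values from the original report):

```python
import torch
from transformers import pipeline
from transformers.utils import is_flash_attn_2_available

pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-tiny",
    torch_dtype=torch.float16,
    device="cuda:0",
    # enable Flash Attention 2 only when the flash-attn package is installed
    model_kwargs={"attn_implementation": "flash_attention_2"} if is_flash_attn_2_available() else {},
)

# "audio.mp3" is a placeholder path
outputs = pipe("audio.mp3", chunk_length_s=30, batch_size=24, return_timestamps=True)
print(outputs["text"])
```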

Thank you for an easy-to-use CLI ❤️ Currently, if the library runs on a CPU-only machine it fails with the following error:

```
RuntimeError: Found no NVIDIA driver on...
```
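One way to avoid the hard failure is to choose the device at runtime. This is only an illustrative sketch using `torch.cuda.is_available()`, not the library's actual behaviour:

```python
import torch
from transformers import pipeline

# Pick the device at runtime: GPU when available, otherwise CPU.
# fp16 is only worthwhile on GPU, so fall back to fp32 on CPU.
use_cuda = torch.cuda.is_available()

pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-tiny",
    torch_dtype=torch.float16 if use_cuda else torch.float32,
    device="cuda:0" if use_cuda else "cpu",
)
```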

Hey there, the timestamps in my tests seem to be much less accurate than those from similar implementations (whisper.cpp or openai/whisper).
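For context on how the timestamps being compared are produced, here is a sketch of requesting segment- and word-level timestamps through the transformers pipeline (these are the standard pipeline parameters, not anything specific to this repo; the model and audio path are placeholders):

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    torch_dtype=torch.float16,
    device="cuda:0",
)

# segment-level timestamps
segments = pipe("audio.mp3", return_timestamps=True)["chunks"]

# word-level timestamps (aligned via cross-attention weights)
words = pipe("audio.mp3", return_timestamps="word")["chunks"]
```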

Main work: 1. call the speaker diarization pipeline. 2. call flash attention for Whisper based on the transformers pipeline. Is there anything else that is missing? (A rough sketch of both steps follows.)
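Roughly, the two steps could look like this. This is a minimal sketch, not the repo's implementation: the `pyannote/speaker-diarization-3.1` checkpoint, the `hf_...` token placeholder, the audio path, and the naive speaker/chunk merge are all assumptions.

```python
import torch
from transformers import pipeline
from pyannote.audio import Pipeline as DiarizationPipeline

# Step 1: speaker diarization with pyannote (checkpoint name and token are assumptions)
diarizer = DiarizationPipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1", use_auth_token="hf_..."
)
diarization = diarizer("audio.mp3")

# Step 2: Whisper via the transformers pipeline, with Flash Attention 2 enabled
asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    torch_dtype=torch.float16,
    device="cuda:0",
    model_kwargs={"attn_implementation": "flash_attention_2"},
)
transcript = asr("audio.mp3", return_timestamps=True)

# Naive merge: attach a speaker label to each transcribed chunk by its start time.
for chunk in transcript["chunks"]:
    start = chunk["timestamp"][0] or 0.0
    speaker = next(
        (label for turn, _, label in diarization.itertracks(yield_label=True)
         if turn.start <= start <= turn.end),
        "UNKNOWN",
    )
    print(speaker, chunk["text"])
```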

Hi, first of all: **thank you for sharing something that makes life easier for people**. I find your project very interesting, so I tried to run the demo notebook in...

Currently, we are leveraging Pyannote's speaker diarisation. However, there is still scope for improvement here, and we should be able to leverage other open-source packages like NVIDIA NeMo. I'd like...

Hi guys, I’ve been desperately trying to host this model on a Google Cloud container - I’m extremely new to all this, and need your help… I’ve been trying to...

For some audio files the diarization works, while for others it does not. If I run an audio file that didn't work with transcription only (no diarization), then it works...