Purfview
Sad news, the tests show that "Faster-Whisper CUDA v12" has a -10% drop in performance, so stay with CUDA v11. RTX 3050 GPU: ``` float16: -10% drop in speed bfloat16: -8%...
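For context, here is a minimal timing sketch of how such a comparison can be run with faster-whisper, assuming a CUDA GPU and a local `sample.wav`; the model size and file name are placeholders, not the exact benchmark setup:

```python
import time
from faster_whisper import WhisperModel

def time_transcription(compute_type, audio="sample.wav", model_size="large-v2"):
    # Load the model with the compute type under test (float16, bfloat16, int8, ...).
    model = WhisperModel(model_size, device="cuda", compute_type=compute_type)
    start = time.perf_counter()
    segments, _info = model.transcribe(audio, beam_size=5)
    # transcribe() returns a lazy generator; consume it so decoding actually runs.
    for _ in segments:
        pass
    return time.perf_counter() - start

for ct in ("float16", "bfloat16"):
    print(f"{ct}: {time_transcription(ct):.1f} s")
```

Running the same script against the CUDA 11 and CUDA 12 builds of the libraries is enough to see the relative speed difference per compute type.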
@Qubitium I think "546.33" and the other stuff are currently the latest official versions. On Windows.
> Check to disable NVIDIA's "virtual" VRAM GPU feature they introduced in 12.x on Windows, which auto-swaps VRAM to host RAM. Lots of users got caught with this killing...
Do you have CUDA 12 vs CUDA 11 benchmarks on Linux? Stats at my repo show only 3% Linux users...
The bleeding-edge builds don't go to the PyPI packages; PyPI gets only the stable versions.
> Wow, what a toxic environment here! BBC-Esq, this request about CUDA 12 is legit, it's not "spam". Obviously it's spam. And BBC-Esq is a known toxic troll & spammer.
It's hard to tell anything without the audio.
Yesterday a similar issue was posted on my repo -> https://github.com/Purfview/whisper-standalone-win/issues/188 Sometimes changing the compute type or beam size triggers the model to transcribe those missing lines. Sometimes nothing helps and...
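For anyone who wants to try that, here is a rough sketch of the experiment with faster-whisper, assuming a local audio file and a hypothetical phrase to check for; the model size and the compute type / beam size values are just examples to iterate over:

```python
from faster_whisper import WhisperModel

AUDIO = "problem_clip.wav"          # placeholder: the clip where lines go missing
MISSING_PHRASE = "expected words"   # placeholder: text that should appear in the output

# Try a few compute type / beam size combinations and report which ones
# produce the lines that the default settings drop.
for compute_type in ("float16", "int8"):
    model = WhisperModel("medium", device="cuda", compute_type=compute_type)
    for beam_size in (1, 5):
        segments, _info = model.transcribe(AUDIO, beam_size=beam_size)
        text = " ".join(seg.text for seg in segments)
        found = MISSING_PHRASE.lower() in text.lower()
        print(f"compute_type={compute_type} beam_size={beam_size} -> found={found}")
```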
> Now, I noticed that the transcription behavior has changed a lot in version 1.0.0. I noticed it happening only in the last chunk. [I tested with old PyAV and...
@kale4eat Actually, I forgot that the https://github.com/SYSTRAN/faster-whisper/commit/00efce1696c21310bbdfd58433adfc8d44c2edbc & https://github.com/SYSTRAN/faster-whisper/commit/ebcfd6b9646f5176fba8b7f3429d0de28a70192c bugfixes were made after the 0.10.0 release. So differences can come from these too if you use the repo by version tags.