faster-whisper
mkl_malloc: failed to allocate memory
Running
from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="cuda", compute_type="float16")
gives me:
mkl_malloc: failed to allocate memory
I have 32 GB of installed DDR5 memory and I'm running this on a 4080 with 16 GB of VRAM. I'm not really sure how this error keeps coming up, since I have tons of RAM. Anyone else experiencing/experienced the same thing?
@samuelbraun04 Mine has started doing this as well occasionally, after working fine previously. Did you find a resolution? My guess is torch versions.
@unmotivatedgene Unfortunately I never figured out a fix, so I assumed it was just something I messed up on my end. But now it's showing up again out of nowhere, and I haven't changed anything, so I'm just confused. And I'm literally running this on a 4080, so I really don't get how I could be running out of memory.
With the new version 1.0.2, I get this error every time, but it works when I load the model with the int8 compute type. Did you find a resolution?
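For anyone who wants to try the int8 workaround mentioned above, here is a minimal sketch. The model size, audio path, and the int8_float16 choice are assumptions for illustration, not details taken from the reports in this thread:

from faster_whisper import WhisperModel

# Workaround sketch: load with an 8-bit compute type instead of float16.
# "int8_float16" quantizes the weights to int8 while keeping float16
# activations on the GPU; plain "int8" uses even less memory.
# "large-v3" and "audio.wav" below are placeholders.
model = WhisperModel("large-v3", device="cuda", compute_type="int8_float16")

segments, info = model.transcribe("audio.wav", beam_size=5)
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")

Quantized compute types trade a small amount of accuracy for a much smaller memory footprint, which is presumably why the int8 load succeeds where the float16 one fails.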
mkl_malloc: failed to allocate memory
Yes, I am experiencing the same issue. Has anyone found a resolution?