
memory leak with stt.py

Open · tensorfoo opened this issue 3 years ago · 1 comment

Hi, if you run inference with a Transcriber object via t.transcribe(...), the resources tied to that call should be released once the result is returned. Instead, they stay in VRAM, and after a few calls to t.transcribe() I get CUDA out-of-memory errors. nvidia-smi shows the memory still occupied even after the transcript has been returned.

It would be nice to have a long-lived Transcriber object that can be reused, avoiding the lengthy creation time. If you're busy, please give me a hint on how it might be done so I can give it a shot and submit a PR. Thanks for your project.
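
In the meantime I've been experimenting with a stopgap along these lines. This is only a sketch, assuming the Transcriber runs PyTorch models under the hood; the transcribe_and_release helper is my own name, not something from this repo:

```python
import gc
import torch

# Hypothetical helper, not part of this repo: run inference without
# building an autograd graph, then release what memory we can.
def transcribe_and_release(transcriber, audio_paths):
    with torch.no_grad():
        # Assumption: transcribe() internally runs a PyTorch forward pass,
        # so no_grad() keeps it from retaining activations for backprop.
        result = transcriber.transcribe(audio_paths)
    # Drop unreachable tensors, then return cached blocks to the CUDA driver.
    gc.collect()
    torch.cuda.empty_cache()
    return result
```

Two caveats: torch.no_grad() only helps if the forward pass is currently building an autograd graph, and nvidia-smi will still show memory reserved by PyTorch's caching allocator even when it is free for reuse, so empty_cache() mainly helps distinguish a true leak from allocator caching.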

tensorfoo · Jul 17 '21

Hi, I have the same problem. Have you solved it?

EdenJin20171503024 · May 27 '22