
High memory consumption during ASR inference

Open olawale1rty opened this issue 2 years ago • 2 comments

❓ Questions and Help

What is your question?

I am seeing high memory consumption during ASR inference on a 16 GB RAM server. The process used up to 15.67 GB of system RAM and caused the whole server to hang.


What's your environment?

  • fairseq Version (e.g., 1.0 or main): main
  • PyTorch Version (e.g., 1.0)
  • OS (e.g., Linux): Linux
  • How you installed fairseq (pip, source): source
  • Build command you used (if compiling from source):
cd /path/to/fairseq-py/
python examples/mms/asr/infer/mms_infer.py --model "/path/to/asr/model" --lang lang_code \
 --audio "/path/to/audio_1.wav"
  • Python version: 3.8
  • CUDA/cuDNN version: None
  • GPU models and configuration: None
  • Any other relevant information: I am using docker.
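One common cause of runaway RAM during CTC-style ASR inference is feeding the whole recording through the model in a single forward pass, since activation memory grows with input length. A hedged workaround (not a fairseq API; `chunk_audio` is a hypothetical helper written for this sketch) is to split long audio into fixed-length chunks so each forward pass has a bounded footprint:

```python
# Hypothetical helper, not part of fairseq: split a long recording into
# fixed-length chunks so each model forward pass stays memory-bounded.
import numpy as np


def chunk_audio(samples: np.ndarray, sample_rate: int, chunk_seconds: float = 30.0):
    """Yield consecutive chunks of at most `chunk_seconds` of audio."""
    chunk_len = int(sample_rate * chunk_seconds)
    for start in range(0, len(samples), chunk_len):
        yield samples[start:start + chunk_len]


# Example: a 95-second mono recording at 16 kHz becomes four chunks
# (three full 30 s chunks plus a 5 s remainder).
audio = np.zeros(95 * 16000, dtype=np.float32)
chunks = list(chunk_audio(audio, 16000, 30.0))
print(len(chunks))  # 4
```

Each chunk would then be transcribed independently and the texts concatenated; this trades a possible word split at chunk boundaries for a predictable memory ceiling.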

olawale1rty avatar Sep 01 '23 16:09 olawale1rty

Hi! @olawale1rty ! Were you able to sort this out?

I'm possibly having a similar memory issue. I'm trying to use AV-HuBERT for AVSR inference in Colab, and the kernel dies for unknown reasons, seemingly at task.inference_step. I suspect RAM consumption is the cause.
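To confirm the RAM hypothesis before the kernel dies, one option is to log the process's peak resident memory around the suspected call. This sketch uses only the standard library (`resource`, Linux-only, so it works in Colab); the commented-out `task.inference_step(...)` line is a placeholder for wherever the crash occurs:

```python
# Sketch: log peak resident memory before and after a suspect call
# to confirm whether RAM exhaustion is killing the Colab kernel.
import resource


def peak_rss_gb() -> float:
    """Peak resident set size of this process in GiB (ru_maxrss is in KiB on Linux)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024**2


print(f"peak RSS before inference: {peak_rss_gb():.2f} GiB")
# hypos = task.inference_step(generator, models, sample)  # the suspected call
print(f"peak RSS after inference:  {peak_rss_gb():.2f} GiB")
```

If the "before" number is already close to the machine's total RAM, the model and data loading alone are the problem; if it jumps only across the inference call, chunking the input or moving to a higher-RAM runtime is the likely fix.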

nastia-lado avatar Sep 29 '23 16:09 nastia-lado

@nastia-lado No, I have not been able to sort it out yet.

olawale1rty avatar Sep 29 '23 16:09 olawale1rty