High memory consumption during ASR inference
❓ Questions and Help
What is your question?
High memory consumption during ASR inference on a server with 16 GB of RAM. Inference used up to 15.67 GB of system RAM and caused the whole server to hang.
What's your environment?
- fairseq Version (e.g., 1.0 or main): main
- PyTorch Version (e.g., 1.0)
- OS (e.g., Linux): Linux
- How you installed fairseq (pip, source): source
- Build command you used (if compiling from source):
cd /path/to/fairseq-py/
python examples/mms/asr/infer/mms_infer.py --model "/path/to/asr/model" --lang lang_code \
--audio "/path/to/audio_1.wav"
- Python version: 3.8
- CUDA/cuDNN version: None
- GPU models and configuration: None
- Any other relevant information: I am using docker.
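One possible mitigation (an assumption on my part, not something confirmed in this thread): if the whole WAV is decoded into memory at once, peak RAM grows with the length of the audio, so splitting long recordings into short chunks before inference caps it. A minimal sketch with torchaudio; the chunk length and /tmp output paths are illustrative:

```python
# Split a long WAV into 30-second chunks so each mms_infer.py call only
# holds a short segment in memory. Chunk length and paths are illustrative.
import torchaudio

CHUNK_SECONDS = 30  # arbitrary; tune to your RAM budget

waveform, sample_rate = torchaudio.load("/path/to/audio_1.wav")
chunk_frames = CHUNK_SECONDS * sample_rate

for i, start in enumerate(range(0, waveform.shape[1], chunk_frames)):
    chunk = waveform[:, start:start + chunk_frames]
    torchaudio.save(f"/tmp/audio_chunk_{i}.wav", chunk, sample_rate)
```

Each /tmp/audio_chunk_*.wav can then be passed to mms_infer.py via --audio and the transcripts concatenated.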
Hi @olawale1rty! Were you able to sort this out?
I'm having a possibly similar memory issue. I'm trying to use AV-HuBERT for AVSR inference in Colab, and the kernel dies for unknown reasons at (it seems) task.inference_step. I think it's because of RAM consumption; a quick check is sketched below.
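A small sketch for confirming that hypothesis (psutil and the commented-out call site are assumptions, not part of AV-HuBERT): log resident memory immediately before and after the suspected step and see how close it gets to the Colab limit.

```python
# Log resident set size (RSS) around the suspected call to confirm that
# RAM exhaustion, not something else, kills the kernel.
# Requires: pip install psutil
import os
import psutil

proc = psutil.Process(os.getpid())

def log_rss(tag: str) -> None:
    print(f"[{tag}] RSS = {proc.memory_info().rss / 1024**3:.2f} GiB")

log_rss("before inference")
# hypos = task.inference_step(generator, models, sample)  # suspected call
log_rss("after inference")
```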
@nastia-lado No, I have not been able to sort it out yet.