transformers-bloom-inference

Cannot explain recurring OOM error

Open Remorax opened this issue 2 years ago • 6 comments

Hi there,

I am trying to use the int8 quantized model of BLOOM 175B for inference and am closely following the bloom-accelerate-inference.py script. I have about 1000 prompts for which I need outputs. I use a beam size of 1 (greedy search) and a batch size of 1, since I cannot fit more into GPU memory (I have 4 × 80 GB A100 GPUs). max_new_tokens is set to 64.
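
For context, the core of my script does roughly the following (a simplified sketch, not the exact code; the int8 loading reflects my understanding of how bloom-accelerate-inference.py handles --dtype int8 via load_in_8bit, and the pickle path is a placeholder):

import pickle

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Shard the int8-quantized model across the 4 A100s via accelerate/bitsandbytes
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    load_in_8bit=True,
)

with open("prompts.pkl", "rb") as f:  # placeholder for ${results_dir}/prompts.pkl
    prompts = pickle.load(f)

def generate_one(prompt: str) -> str:
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
    with torch.no_grad():
        output_ids = model.generate(
            **inputs,
            max_new_tokens=64,
            num_beams=1,       # greedy search
            do_sample=False,
        )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

hypos = [generate_one(p) for p in prompts]  # ~1000 prompts, batch size 1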

When running inference on this list of prompts, my script successfully generates outputs for the first few prompts (61 in this case) and then crashes with an OOM error:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0; 79.17 GiB total capacity; 77.63 GiB already allocated; 11.31 MiB free; 77.92 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Long prompts often cause OOM, but in this case I do not think the length of the current prompt is the cause. I logged prompt lengths just to make sure, and prompts longer than the current one were generated successfully earlier (among the first 61 prompts I mentioned).
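
To make that more concrete, this is the kind of per-prompt logging I could add to see whether allocated memory keeps growing across prompts, which would point to a leak rather than a single oversized prompt (a minimal sketch only):

import torch

def log_gpu_memory(step: int, prompt_len: int) -> None:
    # Print allocated vs reserved memory on every GPU after generating one prompt
    for device in range(torch.cuda.device_count()):
        allocated = torch.cuda.memory_allocated(device) / 2**30
        reserved = torch.cuda.memory_reserved(device) / 2**30
        print(
            f"step={step} prompt_len={prompt_len} gpu={device} "
            f"allocated={allocated:.2f} GiB reserved={reserved:.2f} GiB"
        )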

I am unable to figure out what the possible reason could be. Any suggestions/ideas?

Remorax avatar Mar 17 '23 11:03 Remorax

Can you provide a bit more detail? How did you launch the job? Is it a standalone job or a server deployment using the Makefile?

mayank31398 avatar Mar 19 '23 12:03 mayank31398

Hello, thank you so much for responding. I launch it as a standalone job like this:

CUDA_VISIBLE_DEVICES=0,1,2,3 python ${preprocessing_dir}/query_bloom.py \
    --name bigscience/bloom --dtype int8 \
    --batch_size 1 --num-beams 1 --early-stopping \
    --prompts_file ${results_dir}/prompts.pkl \
    --hypo_file ${results_dir}/hypo.txt

prompts.pkl was created by a previous preprocessing script that works as expected. The only potential issue I could think of is that it might produce overly long prompts, but as explained earlier, prompt length does not appear to be the cause of this error, since longer prompts have worked (unless there is a memory leak).
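
One thing I could try, in case tensors from earlier iterations are being kept alive, is explicit cleanup after each generation call. A rough sketch (model, tokenizer and prompts as loaded in my script; I have not confirmed this fixes the problem):

import gc

import torch

hypos = []
for i, prompt in enumerate(prompts):
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=1, do_sample=False)
    hypos.append(tokenizer.decode(output_ids[0], skip_special_tokens=True))

    # Drop references to this iteration's tensors and release cached blocks
    del inputs, output_ids
    gc.collect()
    torch.cuda.empty_cache()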

I have uploaded query_bloom.py as a gist over here. It is based on the bloom-accelerate-inference.py script and acts as a wrapper on top of it.

Let me know if this suffices!

Remorax avatar Mar 20 '23 00:03 Remorax

Maybe it is because it was trying to generate too many tokens? Depending on their content, different prompts will generate different numbers of new tokens.

richarddwang avatar Mar 22 '23 03:03 richarddwang

could be

mayank31398 avatar Mar 22 '23 05:03 mayank31398

Hi @richarddwang, yes, but I do set max_new_tokens to 64 (L20 in the gist), so this does not seem to be the issue.

Remorax avatar Mar 22 '23 08:03 Remorax

Could be due to a large number of input tokens.
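
You could log the tokenized input length for each prompt to confirm; something along these lines (a sketch, assuming the standard BLOOM tokenizer and an arbitrary threshold):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")

def input_token_count(prompt: str) -> int:
    # Number of input tokens the model will see for this prompt
    return len(tokenizer(prompt)["input_ids"])

# prompts: the list loaded from prompts.pkl; 2000 is an arbitrary cutoff for illustration
long_prompts = [p for p in prompts if input_token_count(p) > 2000]
print(f"{len(long_prompts)} prompts exceed 2000 input tokens")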

mayank31398 avatar Mar 29 '23 16:03 mayank31398