
llama_7b model OOM issue

Open · jinsong-mao opened this issue on Nov 21, 2023 · 3 comments

Hi

I duplicated the llama model, renamed it to llama_7b, and changed the model parameters according to the LLaMA-7B specification (see attached screenshot).
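For reference, a minimal sketch of the kind of hyperparameter change described above. The field names below are illustrative rather than the exact torchbench config keys; the values are the commonly cited LLaMA-7B sizes:

```python
# Illustrative only: field names are hypothetical, values are the usual LLaMA-7B sizes.
llama_7b_config = dict(
    vocab_size=32000,
    hidden_size=4096,          # model dimension
    num_hidden_layers=32,
    num_attention_heads=32,
    intermediate_size=11008,   # SwiGLU feed-forward width
    max_position_embeddings=2048,
)
```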

I skipped the CPU eager mode and only ran the CUDA model.

It reports the following error (see attached screenshot of the traceback) when running with this command: python userbenchmark/dynamo/dynamobench/torchbench.py -dcuda --float16 -n1 --inductor --performance --inference --filter "llama" --batch_size 1 --in_slen 32 --out_slen 3 --output-dir=torchbench_llama_test_logs
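For context, a minimal standalone sketch of roughly what that command exercises (inductor-compiled float16 inference on CUDA with batch size 1 and a 32-token input). It uses a toy transformer layer in place of llama_7b and is not the torchbench harness:

```python
import torch
import torch.nn as nn

# Toy stand-in for llama_7b: a single encoder layer with LLaMA-7B-like width.
model = nn.TransformerEncoderLayer(d_model=4096, nhead=32, batch_first=True)
model = model.to(device="cuda", dtype=torch.float16).eval()
compiled = torch.compile(model, backend="inductor")

# batch_size 1, in_slen 32, matching the command-line flags above
x = torch.randn(1, 32, 4096, device="cuda", dtype=torch.float16)
with torch.no_grad():
    out = compiled(x)
print(out.shape)  # torch.Size([1, 32, 4096])
```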

If I want to run this model, how should I fix it? My hardware is an A100-40G.

thanks

jinsong-mao · Nov 21 '23 08:11

We only guarantee the runnability of models in PT eager mode on A100 40GB in our CI. It is possible that inductor uses more GPU memory than eager mode, causing the OOM. Optimizing GPU memory usage with inductor is an open question. cc @msaroufim
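One hedged way to compare the two modes, assuming a model and example_input already placed on CUDA in float16, is to read the peak allocator statistics and see whether inductor peaks higher than eager:

```python
import torch

def peak_mem_gb(fn):
    """Run fn once and return the peak CUDA memory it allocated, in GB."""
    torch.cuda.reset_peak_memory_stats()
    with torch.no_grad():
        fn()
    torch.cuda.synchronize()
    return torch.cuda.max_memory_allocated() / 1e9

# `model` and `example_input` are assumed to already live on CUDA in float16.
eager_gb = peak_mem_gb(lambda: model(example_input))
compiled = torch.compile(model, backend="inductor")
inductor_gb = peak_mem_gb(lambda: compiled(example_input))
print(f"eager peak: {eager_gb:.2f} GB, inductor peak: {inductor_gb:.2f} GB")
```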

xuzhao9 · Nov 21 '23 14:11

@xuzhao9 I tried to use 4x A100-40G to avoid the OOM issue, but it looks like torchbench.py only uses one GPU's memory. I tried options like --device-index and --multiprocess; both failed. Do you have any advice on multi-GPU support?
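For illustration only: the flags above appear to target running replicas per device rather than sharding a single model, so fitting one 7B model across GPUs would need manual model parallelism outside the harness. A toy sketch of such a manual split, assuming at least two visible CUDA devices and standing in for the real model:

```python
import torch
import torch.nn as nn

class TwoStage(nn.Module):
    """Toy two-stage model split by hand across two GPUs (not llama_7b)."""
    def __init__(self):
        super().__init__()
        self.stage0 = nn.Linear(4096, 4096).to("cuda:0", torch.float16)
        self.stage1 = nn.Linear(4096, 4096).to("cuda:1", torch.float16)

    def forward(self, x):
        x = self.stage0(x.to("cuda:0", torch.float16))
        x = self.stage1(x.to("cuda:1"))  # move activations to the second GPU
        return x

model = TwoStage().eval()
with torch.no_grad():
    y = model(torch.randn(1, 32, 4096))
print(y.device)  # cuda:1
```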

thanks

jinsong-mao · Nov 24 '23 02:11