
Evaluate on a single 24GB/32GB GPU

Open lemyx opened this issue 1 year ago • 1 comment

Hi, on a single 4090 GPU with 24GB of memory, the following command causes an out-of-memory error:

python main.py mmlu --model_name llama --model_path huggyllama/llama-7b

After that, I tried executing the command on an A100-40GB; the nvidia-smi result is shown in the attached screenshot.

It seems that neither a 4090/3090 with 24GB of memory nor a V100 with 32GB can evaluate Llama-7B on MMLU with the above command.

So how can I evaluate llama-7b on MMLU with a 24GB or 32GB GPU? Are there any more options to enable?

Thanks

lemyx — Jan 16 '24 23:01

It seems that CUDA memory usage increases during execution of the script:

[screenshot: nvidia-smi showing memory usage growing during the run]
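For reference, this growth can also be confirmed from inside Python with PyTorch's allocator statistics instead of polling nvidia-smi. A minimal sketch (the `log_cuda_memory` helper is my own illustration, not part of the repo):

```python
import torch

def log_cuda_memory(tag: str) -> None:
    # Current and peak memory held by tensors on the default CUDA device.
    allocated = torch.cuda.memory_allocated() / 2**30
    peak = torch.cuda.max_memory_allocated() / 2**30
    print(f"[{tag}] allocated={allocated:.2f} GiB, peak={peak:.2f} GiB")

log_cuda_memory("before eval")
# ... run a batch of MMLU prompts through the model here ...
log_cuda_memory("after eval")
```

If the peak keeps climbing across batches, the usage is input-dependent rather than a fixed-size allocation.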

This may be related to the maximum sequence length (see the attached screenshot).
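If that is the case, a generic mitigation is to truncate prompts at tokenization time so activation memory stays bounded. A sketch with Hugging Face transformers, assuming direct access to the tokenizer (whether instruct-eval exposes such a cap as a command-line option is something I have not checked):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

# Placeholder for one of the long few-shot MMLU prompts.
prompt = "The following are multiple choice questions about abstract algebra. ..."

# Truncating to a fixed cap keeps activation memory from scaling with
# the longest prompt in the dataset.
inputs = tokenizer(
    prompt,
    truncation=True,
    max_length=2048,  # LLaMA's context window; lower it to trade context for memory
    return_tensors="pt",
)
```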

Finally, the inference finished successfully on a single A100-40GB card (screenshot attached).
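For anyone still trying to fit this on a 24GB card: I have not verified whether the repo exposes a quantization option, but at the transformers level, loading the weights in 8-bit via bitsandbytes roughly halves the footprint compared to fp16 and usually leaves enough headroom for a 7B model. A sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_path = "huggyllama/llama-7b"

# 8-bit weights take ~7 GB instead of ~13 GB in fp16, leaving room for
# activations and the KV cache on a 24 GB card.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",  # requires the accelerate package
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.eval()
```

A lighter-touch alternative is passing `torch_dtype=torch.float16` instead of the quantization config, though fp16 weights alone already need about 13-14 GB. Patching the model-loading call inside the repo with either setting should be straightforward.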

lemyx — Jan 17 '24 00:01