
[Question] How to use multi GPUs with LlavaLlamaForCausalLM.from_pretrained

Open JesseZZZZZ opened this issue 10 months ago • 0 comments

Hi! Thank you for your great work; I want to run it on my own server. However, I only have eight 24GB RTX 4090s, and running inference on a single 4090 causes an OOM error. Looking through previous issues, I see there are some methods for using multiple GPUs, but I could not get them to work in my case. `LlavaLlamaForCausalLM.from_pretrained` does not seem to support the `device_map` parameter, so I'm a bit confused. Thanks!
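For reference, here is roughly what I tried, following the standard Hugging Face multi-GPU pattern. This is only a sketch: it assumes `LlavaLlamaForCausalLM` inherits from `transformers.PreTrainedModel` (so the usual `device_map` / `max_memory` kwargs would apply via `accelerate`), and the checkpoint name, import path, and memory limits are illustrative, not confirmed for this repo:

```python
# Sketch: sharding the model across GPUs with accelerate's device_map.
# Assumes the class behaves like a standard transformers PreTrainedModel;
# checkpoint name and per-GPU memory cap below are placeholders.
import torch
from llava.model import LlavaLlamaForCausalLM  # import path assumed from the LLaVA repo

model = LlavaLlamaForCausalLM.from_pretrained(
    "liuhaotian/llava-v1.5-13b",                 # example checkpoint; substitute your own
    torch_dtype=torch.float16,
    device_map="auto",                           # let accelerate split layers across GPUs
    max_memory={i: "20GiB" for i in range(8)},   # leave headroom on each 24GB card
)
```

When I pass `device_map` like this, it is not accepted, which is why I am asking how multi-GPU loading is supposed to work here.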

JesseZZZZZ · Apr 26 '24 03:04