[Question] How to use multi GPUs with LlavaLlamaForCausalLM.from_pretrained
Hi! Thank you for your great work; I want to run it on my own server. However, I only have eight 24GB RTX 4090s, and running inference on a single 4090 causes an OOM error. After looking through previous issues, I see there are some methods for using multiple GPUs, but I failed to apply them in my own case. LlavaLlamaForCausalLM.from_pretrained does not seem to support the device_map parameter, so I'm a bit confused. Thanks!
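For reference, this is roughly what I tried, following the usual Hugging Face multi-GPU loading pattern. The import path and the checkpoint name are just from my own setup (not taken from the LLaVA docs), so they may not match exactly:

```python
import torch
from llava.model import LlavaLlamaForCausalLM  # import path assumed from the repo layout
from transformers import AutoTokenizer

# Placeholder checkpoint from my setup; swap in whichever LLaVA weights you use.
model_path = "liuhaotian/llava-v1.5-13b"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)

# device_map="auto" should let accelerate shard the fp16 weights across all
# visible GPUs instead of loading everything onto one 4090, but passing it
# here is exactly the step that fails for me.
model = LlavaLlamaForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
)
```

Is this the right way to pass device_map here, or is there a different entry point I should use for multi-GPU inference?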