InternVL
Large 34B model OOM in evaluation
This line places the entire model onto GPU:0; it should instead load with `device_map="auto"` so the weights are sharded across all available GPUs.
https://github.com/OpenGVLab/InternVL/blob/2577068ba16fb3c17901fb3479a48b580c99c00b/internvl_chat/eval/mmmu/evaluate_mmmu.py#L286
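A minimal sketch of the suggested fix, assuming the standard Hugging Face Transformers `from_pretrained` keywords (the helper name `load_sharded` is illustrative; `checkpoint` stands for whatever path the evaluation script currently passes):

```python
def load_sharded(checkpoint: str):
    """Sketch: load a large model sharded across GPUs via device_map="auto".

    Assumes the Transformers/Accelerate APIs; InternVL checkpoints ship
    custom modeling code, hence trust_remote_code=True.
    """
    import torch
    from transformers import AutoModel

    return AutoModel.from_pretrained(
        checkpoint,
        torch_dtype=torch.bfloat16,   # half-precision weights cut memory roughly in half
        low_cpu_mem_usage=True,       # stream weights instead of building a full CPU copy
        device_map="auto",            # let Accelerate place layers across all visible GPUs
        trust_remote_code=True,       # load the repo's custom model class
    ).eval()
```

With `device_map="auto"`, layers that do not fit on one GPU spill onto the next, so a 34B checkpoint can be evaluated across several smaller cards instead of OOMing on GPU:0.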
Thanks for your feedback.