
Device errors running the multi-gpu demo on huggingface for InternVL2-76B

ThibaultGROUEIX opened this issue 1 year ago

Checklist

  • [X] 1. I have searched related issues but cannot get the expected help.
  • [X] 2. The bug has not been fixed in the latest version.
  • [ ] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.

Describe the bug

I tried running https://huggingface.co/OpenGVLab/InternVL2-Llama3-76B#inference-with-transformers and hit a device error: some tensors were on the CPU and some on the GPU. I did not investigate further, sorry, but I thought I would report it.

Reproduction

Run the script in https://huggingface.co/OpenGVLab/InternVL2-Llama3-76B#inference-with-transformers

Environment

I followed the readme.

Error traceback

Some tensors are on the "CPU" and some are on the "GPU".
(My session died, and I did not save the stacktrace unfortunately)

ThibaultGROUEIX avatar Jul 28 '24 00:07 ThibaultGROUEIX

Hello, thank you for your feedback. Could you please provide the code you ran? Following the README code, I can run this model on multiple GPUs, so I can't reproduce this issue.

czczup avatar Jul 30 '24 09:07 czczup

> Hello, thank you for your feedback. Could you please provide the code you ran? Following the README code, I can run this model on multiple GPUs, so I can't reproduce this issue.

Hello, I got the same issue; here is my code (directly copy-pasted from the sample in the HF repo):

import math

import numpy as np
import torch
import torchvision.transforms as T
from decord import VideoReader, cpu
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer

def split_model(model_name):
    device_map = {}
    world_size = torch.cuda.device_count()
    num_layers = {
        'InternVL2-1B': 24, 'InternVL2-2B': 24, 'InternVL2-4B': 32, 'InternVL2-8B': 32,
        'InternVL2-26B': 48, 'InternVL2-40B': 60, 'InternVL2-Llama3-76B': 80}[model_name]
    # Since the first GPU will be used for ViT, treat it as half a GPU.
    num_layers_per_gpu = math.ceil(num_layers / (world_size - 0.5))
    num_layers_per_gpu = [num_layers_per_gpu] * world_size
    num_layers_per_gpu[0] = math.ceil(num_layers_per_gpu[0] * 0.5)
    layer_cnt = 0
    for i, num_layer in enumerate(num_layers_per_gpu):
        for j in range(num_layer):
            device_map[f'language_model.model.layers.{layer_cnt}'] = i
            layer_cnt += 1
    device_map['vision_model'] = 0
    device_map['mlp1'] = 0
    device_map['language_model.model.tok_embeddings'] = 0
    device_map['language_model.model.embed_tokens'] = 0
    device_map['language_model.output'] = 0
    device_map['language_model.model.norm'] = 0
    device_map['language_model.lm_head'] = 0
    # Also place the last decoder layer on GPU 0, next to the final norm and lm_head.
    device_map[f'language_model.model.layers.{num_layers - 1}'] = 0

    return device_map

path = "OpenGVLab/InternVL2-26B"
device_map = split_model('InternVL2-26B')
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map=device_map).eval()

tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
generation_config = dict(max_new_tokens=1024, do_sample=False)

question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)

Full traceback:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../.cache/huggingface/modules/transformers_modules/OpenGVLab/InternVL2-26B/b7d02ef0ba01625f903b5994a448c6fe0d26dd9f/modeling_internvl_chat.py", line 285, in chat
    generation_output = self.generate(
  File ".../lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File ".../.cache/huggingface/modules/transformers_modules/OpenGVLab/InternVL2-26B/b7d02ef0ba01625f903b5994a448c6fe0d26dd9f/modeling_internvl_chat.py", line 333, in generate
    input_embeds = self.language_model.get_input_embeddings()(input_ids)
  File ".../lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File ".../lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File ".../lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 164, in forward
    return F.embedding(
  File ".../lib/python3.10/site-packages/torch/nn/functional.py", line 2267, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)

seermer avatar Jul 30 '24 17:07 seermer

Same issue here.

zhangj1an avatar Aug 13 '24 06:08 zhangj1an

Same error while trying the HF demo (https://huggingface.co/OpenGVLab/InternVL2-40B) on a 4-GPU server (Python 3.10, transformers v4.43.2):

device_map
{'language_model.model.layers.0': 0, 'language_model.model.layers.1': 0, 'language_model.model.layers.2': 0, 'language_model.model.layers.3': 0, 'language_model.model.layers.4': 0, 'language_model.model.layers.5': 0, 'language_model.model.layers.6': 0, 'language_model.model.layers.7': 0, 'language_model.model.layers.8': 0, 'language_model.model.layers.9': 1, 'language_model.model.layers.10': 1, 'language_model.model.layers.11': 1, 'language_model.model.layers.12': 1, 'language_model.model.layers.13': 1, 'language_model.model.layers.14': 1, 'language_model.model.layers.15': 1, 'language_model.model.layers.16': 1, 'language_model.model.layers.17': 1, 'language_model.model.layers.18': 1, 'language_model.model.layers.19': 1, 'language_model.model.layers.20': 1, 'language_model.model.layers.21': 1, 'language_model.model.layers.22': 1, 'language_model.model.layers.23': 1, 'language_model.model.layers.24': 1, 'language_model.model.layers.25': 1, 'language_model.model.layers.26': 1, 'language_model.model.layers.27': 2, 'language_model.model.layers.28': 2, 'language_model.model.layers.29': 2, 'language_model.model.layers.30': 2, 'language_model.model.layers.31': 2, 'language_model.model.layers.32': 2, 'language_model.model.layers.33': 2, 'language_model.model.layers.34': 2, 'language_model.model.layers.35': 2, 'language_model.model.layers.36': 2, 'language_model.model.layers.37': 2, 'language_model.model.layers.38': 2, 'language_model.model.layers.39': 2, 'language_model.model.layers.40': 2, 'language_model.model.layers.41': 2, 'language_model.model.layers.42': 2, 'language_model.model.layers.43': 2, 'language_model.model.layers.44': 2, 'language_model.model.layers.45': 3, 'language_model.model.layers.46': 3, 'language_model.model.layers.47': 3, 'language_model.model.layers.48': 3, 'language_model.model.layers.49': 3, 'language_model.model.layers.50': 3, 'language_model.model.layers.51': 3, 'language_model.model.layers.52': 3, 'language_model.model.layers.53': 3, 'language_model.model.layers.54': 3, 'language_model.model.layers.55': 3, 'language_model.model.layers.56': 3, 'language_model.model.layers.57': 3, 'language_model.model.layers.58': 3, 'language_model.model.layers.59': 0, 'language_model.model.layers.60': 3, 'language_model.model.layers.61': 3, 'language_model.model.layers.62': 3, 'vision_model': 0, 'mlp1': 0, 'language_model.model.tok_embeddings': 0, 'language_model.model.embed_tokens': 0, 'language_model.output': 0, 'language_model.model.norm': 0, 'language_model.lm_head': 0}

Load model OpenGVLab/InternVL2-40B ...
Unrecognized keys in `rope_scaling` for 'rope_type'='dynamic': {'type'}
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17/17 [04:03<00:00, 14.32s/it]
Load tokenizer ...
Load image ...
Run inference ...
Traceback (most recent call last):
  File "run_internV2_inference.py", line 164, in <module>
    response = model.chat(tokenizer, pixel_values, question, generation_config)
  File "[...]/huggingface/modules/transformers_modules/OpenGVLab/InternVL2-40B/b52a031c8dc5c9fc2da55daae3cf1d7062371d13/modeling_internvl_chat.py", line 285, in chat
    generation_output = self.generate(
  File "[...]/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "[...]/huggingface/modules/transformers_modules/OpenGVLab/InternVL2-40B/b52a031c8dc5c9fc2da55daae3cf1d7062371d13/modeling_internvl_chat.py", line 335, in generate
    outputs = self.language_model.generate(
  File "[...]/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "[...]/lib/python3.10/site-packages/transformers/generation/utils.py", line 1989, in generate
    result = self._sample(
  File "[...]/lib/python3.10/site-packages/transformers/generation/utils.py", line 2932, in _sample
    outputs = self(**model_inputs, return_dict=True)
  File "[...]/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "[...]/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "[...]/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1141, in forward
    outputs = self.model(
  File "[...]/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "[...]/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "[...]/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 920, in forward
    position_embeddings = self.rotary_emb(hidden_states, position_ids)
  File "[...]/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "[...]/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "[...]/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "[...]/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 153, in forward
    freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat2 in method wrapper_CUDA_bmm)

simoneriggi avatar Sep 04 '24 17:09 simoneriggi

Hello, I have confirmed that this issue is caused by newer versions of transformers. Downgrading to version 4.37.2 will definitely resolve it, and versions around 4.40 might also work.
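
If it helps, a small guard like the following fails fast with a clear message instead of crashing mid-generation. This is just a sketch, and the exact version cut-off is an assumption based on the report above; the confirmed workaround is simply pip install transformers==4.37.2.

# Sketch: fail fast if the installed transformers is too new for the readme device_map.
from packaging import version  # packaging is already a transformers dependency
import transformers

if version.parse(transformers.__version__) > version.parse("4.40.2"):
    raise RuntimeError(
        f"transformers {transformers.__version__} may leave rotary embeddings on the CPU "
        "with the readme device_map; downgrade to 4.37.2 (versions around 4.40 might also work)."
    )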

czczup avatar Sep 06 '24 13:09 czczup

@czczup I was able to fix this issue with the latest version of transformers by adding language_model.model.rotary_emb to the device_map. I am using 2x H100 GPUs. I also had to move a few more LLM layers from GPU 1 to GPU 0 to avoid CUDA OOM errors.

The final device map I used looks like this:

{
    "language_model.model.layers.0": 0,
    "language_model.model.layers.1": 0,
    "language_model.model.layers.2": 0,
    "language_model.model.layers.3": 0,
    "language_model.model.layers.4": 0,
    "language_model.model.layers.5": 0,
    "language_model.model.layers.6": 0,
    "language_model.model.layers.7": 0,
    "language_model.model.layers.8": 0,
    "language_model.model.layers.9": 0,
    "language_model.model.layers.10": 0,
    "language_model.model.layers.11": 0,
    "language_model.model.layers.12": 0,
    "language_model.model.layers.13": 0,
    "language_model.model.layers.14": 0,
    "language_model.model.layers.15": 0,
    "language_model.model.layers.16": 0,
    "language_model.model.layers.17": 0,
    "language_model.model.layers.18": 0,
    "language_model.model.layers.19": 0,
    "language_model.model.layers.20": 0,
    "language_model.model.layers.21": 0,
    "language_model.model.layers.22": 0,
    "language_model.model.layers.23": 0,
    "language_model.model.layers.24": 0,
    "language_model.model.layers.25": 0,
    "language_model.model.layers.26": 0,
    "language_model.model.layers.27": 0,
    "language_model.model.layers.28": 0,
    "language_model.model.layers.29": 0,
    "language_model.model.layers.30": 0,
    "language_model.model.layers.31": 0,
    "language_model.model.layers.32": 0,
    "language_model.model.layers.33": 1,
    "language_model.model.layers.34": 1,
    "language_model.model.layers.35": 1,
    "language_model.model.layers.36": 1,
    "language_model.model.layers.37": 1,
    "language_model.model.layers.38": 1,
    "language_model.model.layers.39": 1,
    "language_model.model.layers.40": 1,
    "language_model.model.layers.41": 1,
    "language_model.model.layers.42": 1,
    "language_model.model.layers.43": 1,
    "language_model.model.layers.44": 1,
    "language_model.model.layers.45": 1,
    "language_model.model.layers.46": 1,
    "language_model.model.layers.47": 1,
    "language_model.model.layers.48": 1,
    "language_model.model.layers.49": 1,
    "language_model.model.layers.50": 1,
    "language_model.model.layers.51": 1,
    "language_model.model.layers.52": 1,
    "language_model.model.layers.53": 1,
    "language_model.model.layers.54": 1,
    "language_model.model.layers.55": 1,
    "language_model.model.layers.56": 1,
    "language_model.model.layers.57": 1,
    "language_model.model.layers.58": 1,
    "language_model.model.layers.59": 1,
    "language_model.model.layers.60": 1,
    "language_model.model.layers.61": 1,
    "language_model.model.layers.62": 1,
    "language_model.model.layers.63": 1,
    "language_model.model.layers.64": 1,
    "language_model.model.layers.65": 1,
    "language_model.model.layers.66": 1,
    "language_model.model.layers.67": 1,
    "language_model.model.layers.68": 1,
    "language_model.model.layers.69": 1,
    "language_model.model.layers.70": 1,
    "language_model.model.layers.71": 1,
    "language_model.model.layers.72": 1,
    "language_model.model.layers.73": 1,
    "language_model.model.layers.74": 1,
    "language_model.model.layers.75": 1,
    "language_model.model.layers.76": 1,
    "language_model.model.layers.77": 1,
    "language_model.model.layers.78": 1,
    "language_model.model.layers.79": 0,
    "vision_model": 0,
    "mlp1": 0,
    "language_model.model.rotary_emb": 0,
    "language_model.model.embed_tokens": 0,
    "language_model.model.norm": 0,
    "language_model.lm_head": 0
}
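
For anyone who would rather keep the readme's split_model helper than hand-write the map, a minimal sketch of the same fix is below. The model name is just an example, and you may still need to rebalance the per-GPU layer counts for your own memory budget:

import torch
from transformers import AutoModel

# Sketch: reuse split_model() from the readme and add the single entry that newer
# transformers needs, because rotary embeddings now live in a shared submodule
# (language_model.model.rotary_emb) that the readme map never assigns to a GPU.
device_map = split_model('InternVL2-Llama3-76B')    # readme helper
device_map['language_model.model.rotary_emb'] = 0   # keep RoPE with embed_tokens on GPU 0

model = AutoModel.from_pretrained(
    'OpenGVLab/InternVL2-Llama3-76B',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map=device_map).eval()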

rohit-gupta avatar Sep 12 '24 01:09 rohit-gupta

> @czczup I was able to fix this issue with the latest version of transformers by adding language_model.model.rotary_emb to the device_map. [...]

How did you find the solution?

Strand2013 avatar Sep 12 '24 03:09 Strand2013

@Strand2013 The error message comes from the rotary embedding module, so I checked whether the weights for that module were placed on the right device in the device_map.
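
In case it helps others do the same check, a quick sketch: hf_device_map is set by from_pretrained when you pass a device_map, and the rotary_emb path below assumes the Llama-based 76B model under a recent transformers.

# Sketch: inspect where accelerate actually placed each submodule.
print(model.hf_device_map)

# The rotary embedding module only carries buffers (inv_freq), so check their device:
rotary = model.language_model.model.rotary_emb
print({buf.device for buf in rotary.buffers()})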

rohit-gupta avatar Sep 12 '24 07:09 rohit-gupta