70B Model is Using 200GB of VRAM

Open · RJain12 opened this issue on Jul 21, 2023 · 3 comments

I am having trouble running inference on the 70B model because it spills into additional CPU memory, which creates a performance bottleneck. It is unable to load all of the 70B weights onto 8 V100 GPUs. How can I make sure the model runs only on the GPUs, or is there any way to reduce memory usage so that inference fits comfortably on the 8 GPUs? Generation is extremely slow because the last layers (see the device map below) are running on the CPU.

I am using the following code:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "meta-llama/Llama-2-70b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_name, cache_dir="/work/data/hf_tokenizers", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, cache_dir="/work/data/hf_models", trust_remote_code=True, device_map="auto")
```

When I query `model.hf_device_map`, I get:

```
{'model.embed_tokens': 0,
 'model.layers.0': 0,
 'model.layers.1': 0,
 'model.layers.2': 0,
 'model.layers.3': 0,
 'model.layers.4': 0,
 'model.layers.5': 0,
 'model.layers.6': 0,
 'model.layers.7': 0,
 'model.layers.8': 1,
 'model.layers.9': 1,
 'model.layers.10': 1,
 'model.layers.11': 1,
 'model.layers.12': 1,
 'model.layers.13': 1,
 'model.layers.14': 1,
 'model.layers.15': 1,
 'model.layers.16': 1,
 'model.layers.17': 2,
 'model.layers.18': 2,
 'model.layers.19': 2,
 'model.layers.20': 2,
 'model.layers.21': 2,
 'model.layers.22': 2,
 'model.layers.23': 2,
 'model.layers.24': 2,
 'model.layers.25': 2,
 'model.layers.26': 3,
 'model.layers.27': 3,
 'model.layers.28': 3,
 'model.layers.29': 3,
 'model.layers.30': 3,
 'model.layers.31': 3,
 'model.layers.32': 3,
 'model.layers.33': 3,
 'model.layers.34': 3,
 'model.layers.35': 4,
 'model.layers.36': 4,
 'model.layers.37': 4,
 'model.layers.38': 4,
 'model.layers.39': 4,
 'model.layers.40': 4,
 'model.layers.41': 4,
 'model.layers.42': 4,
 'model.layers.43': 4,
 'model.layers.44': 5,
 'model.layers.45': 5,
 'model.layers.46': 5,
 'model.layers.47': 5,
 'model.layers.48': 5,
 'model.layers.49': 5,
 'model.layers.50': 5,
 'model.layers.51': 5,
 'model.layers.52': 5,
 'model.layers.53': 6,
 'model.layers.54': 6,
 'model.layers.55': 6,
 'model.layers.56': 6,
 'model.layers.57': 6,
 'model.layers.58': 6,
 'model.layers.59': 6,
 'model.layers.60': 6,
 'model.layers.61': 6,
 'model.layers.62': 7,
 'model.layers.63': 7,
 'model.layers.64': 7,
 'model.layers.65': 7,
 'model.layers.66': 7,
 'model.layers.67': 7,
 'model.layers.68': 7,
 'model.layers.69': 7,
 'model.layers.70': 7,
 'model.layers.71': 'cpu',
 'model.layers.72': 'cpu',
 'model.layers.73': 'cpu',
 'model.layers.74': 'cpu',
 'model.layers.75': 'cpu',
 'model.layers.76': 'cpu',
 'model.layers.77': 'cpu',
 'model.layers.78': 'cpu',
 'model.layers.79': 'cpu',
 'model.norm': 'cpu',
 'lm_head': 'cpu'}
```

RJain12 · Jul 21, 2023
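An editorial aside on the likely cause (this is an assumption, not something stated in the thread): without an explicit torch_dtype, from_pretrained loads the checkpoint in float32, so the 70B weights alone need roughly 280 GB, more than even 8 × 32 GB V100s provide, and device_map="auto" therefore spills the last layers to the CPU. A minimal sketch of a GPU-only load, assuming the 32 GB V100 variant:

```python
# A sketch, not the original poster's code: load in fp16 and cap per-GPU memory
# so accelerate keeps every layer on a GPU. Assumes 8 x 32 GB V100s.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "meta-llama/Llama-2-70b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,                   # ~140 GB of weights instead of ~280 GB in fp32
    device_map="auto",                           # shard layers across the visible GPUs
    max_memory={i: "30GiB" for i in range(8)},   # leave headroom for activations and the KV cache
)
```

The max_memory cap is optional; it just keeps a few gigabytes free on each card for activations and the KV cache instead of letting the dispatcher fill the GPUs to the brim.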

Hello, I was able to run it on 4 V100s using the code below. It almost fit on 4 GPUs without load_in_8bit but still needed a bit more memory. I recommend trying the torch_dtype parameter first; if that doesn't help, add load_in_8bit.

```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

access_token = "your token"

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-chat-hf", use_auth_token=access_token)
model = LlamaForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-chat-hf",
    use_auth_token=access_token,
    load_in_8bit=True,             # 8-bit quantization via bitsandbytes
    torch_dtype=torch.float16,
    device_map="auto",
    low_cpu_mem_usage=True,
)
```
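For completeness, a short generation sketch (not from the thread) showing how inference might be run once the model above has loaded; the prompt and generation settings are arbitrary placeholders:

```python
# A usage sketch assuming the `tokenizer` and `model` variables loaded above.
# With device_map="auto", the embedding layer sits on GPU 0, so inputs go there.
prompt = "Write a haiku about large language models."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```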

25icecreamflavors · Jul 29, 2023

Using 4 V100s and the same code from @25icecreamflavors, I get the following error:

Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set load_in_8bit_fp32_cpu_offload=True and pass a custom device_map to from_pretrained. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu for more details.

Any suggestions?

ranjanshivaji · Jul 30, 2023
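One possible reading of that error (an assumption; it depends on which V100 variant is in the machine) is that the int8 weights, roughly 70 GB, do not fit on 4 × 16 GB cards, so accelerate wants to offload some modules. A hedged sketch of the workaround the message points to, using BitsAndBytesConfig; note that in recent transformers versions the flag is spelled llm_int8_enable_fp32_cpu_offload rather than load_in_8bit_fp32_cpu_offload:

```python
# A sketch of the CPU-offload workaround described in the error message.
# Offloaded modules run in fp32 on the CPU, so those layers will be slow.
import torch
from transformers import LlamaForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,  # let modules that don't fit on GPU stay on the CPU
)

model = LlamaForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-chat-hf",
    quantization_config=quant_config,
    device_map="auto",          # accelerate decides the GPU/CPU split
    torch_dtype=torch.float16,
)
```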

> Using 4 V100s and the same code from @25icecreamflavors, I get the following error:
>
> Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set load_in_8bit_fp32_cpu_offload=True and pass a custom device_map to from_pretrained. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu for more details.
>
> Any suggestions?

Same here... I'm loading Llama-2-70b-hf with load_in_8bit=True and getting the same error. It is weird because loading llama-65b with the exact same arguments goes smoothly.

zjysteven · Mar 24, 2024
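As a closing aside, a small diagnostic sketch (not from the thread) that applies to any of the setups above: it lists which modules ended up on the CPU or disk and how much memory each GPU actually holds, which makes it easy to check whether a given dtype or quantization setting fits:

```python
# Diagnostic sketch assuming a `model` loaded with device_map="auto".
import torch

offloaded = [name for name, dev in model.hf_device_map.items() if dev in ("cpu", "disk")]
print("Modules offloaded to CPU/disk:", offloaded or "none")

for i in range(torch.cuda.device_count()):
    used = torch.cuda.memory_allocated(i) / 1024**3
    total = torch.cuda.get_device_properties(i).total_memory / 1024**3
    print(f"GPU {i}: {used:.1f} GiB allocated of {total:.1f} GiB")
```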