
`load_in_8bit_fp32_cpu_offload=True`

thibaudart opened this issue 1 year ago · 4 comments

Any idea how to solve this:

Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set load_in_8bit_fp32_cpu_offload=True and pass a custom device_map to from_pretrained. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu for more details.

I have 48 GB of VRAM; the GPU RAM must be enough!
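
As a quick sanity check that PyTorch actually sees all of that VRAM, a standard torch query like this can help (a minimal sketch, nothing MiniGPT-4 specific):

import torch

# Print the name and total memory of every visible CUDA device.
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"cuda:{i} {props.name}: {props.total_memory / 1024**3:.1f} GiB")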

thibaudart · Apr 18 '23

48 GB of GPU RAM should be enough to run the demo without 8-bit quantization. Can you set low_resource to False in eval_configs/minigpt4_eval.yaml and check whether you still have this issue?
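
If you would rather flip the flag programmatically than edit the file by hand, the configs are plain YAML; a minimal sketch using omegaconf, assuming low_resource sits under the model section as in the repo's eval config:

from omegaconf import OmegaConf

# Load the eval config, disable the low-resource (8-bit + CPU offload) path,
# and write it back.
cfg = OmegaConf.load("eval_configs/minigpt4_eval.yaml")
cfg.model.low_resource = False
OmegaConf.save(cfg, "eval_configs/minigpt4_eval.yaml")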

TsuTikgiau · Apr 18 '23

I have followed the code given in the Hugging Face docs:

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

device_map = {
    "transformer.word_embeddings": 0,
    "transformer.word_embeddings_layernorm": 0,
    "lm_head": "cpu",
    "transformer.h": 0,
    "transformer.ln_f": 0,
}

quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True)

model = AutoModelForCausalLM.from_pretrained("AlekseyKorshuk/vicuna-7b", device_map="auto", quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained("AlekseyKorshuk/vicuna-7b")

Getting this error:

TypeError: __init__() got an unexpected keyword argument 'load_in_8bit_fp32_cpu_offload'

vrunm · May 02 '23

Try this: pass the custom device_map you defined instead of "auto":

model = AutoModelForCausalLM.from_pretrained("AlekseyKorshuk/vicuna-7b", device_map=device_map, quantization_config=quantization_config)
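
With the custom map, accelerate dispatches exactly the modules you listed, so the fp32 CPU offload declared in the quantization config matches what actually lands on the CPU. You can confirm the final placement afterwards; transformers records it in hf_device_map whenever a device_map is used:

# Inspect where each module group was dispatched after loading.
print(model.hf_device_map)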

ryzn0518 · May 29 '23

I solved that error like this; you can do the same for your model:

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load model and tokenizer. Note the flag is llm_int8_enable_fp32_cpu_offload;
# the name in the error message (load_in_8bit_fp32_cpu_offload) does not exist
# on BitsAndBytesConfig.
quantization_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_enable_fp32_cpu_offload=True)

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1", quantization_config=quantization_config)
model = PeftModel.from_pretrained(model, "mirajbhandari/mistral-7b-chat-finetune", device_map="auto")

tokenizer = AutoTokenizer.from_pretrained("mirajbhandari/mistral-7b-chat-finetune")
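
For completeness, a minimal usage sketch for the model loaded above (the prompt text is just an illustration):

# Run a short generation to confirm the offloaded model works end to end.
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))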

mirajdeepbhandari · Mar 10 '24