
Load checkpoint after finetuning

huynhbaobk opened this issue on Aug 28, 2024 · 0 comments

After finetuning and saving a checkpoint, I could not load the model:

```python
from llava.model.builder import load_pretrained_model

pretrained = "/workspace/checkpoints/checkpoint-1006"
model_name = "llava_qwen"
device = "cuda"
device_map = "auto"

tokenizer, model, image_processor, max_length = load_pretrained_model(
    pretrained, None, model_name, device_map=device_map
)  # Add any other thing you want to pass in llava_model_args

model.eval()
```

```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[82], line 10
      8 device = "cuda"
      9 device_map = "auto"
---> 10 tokenizer, model, image_processor, max_length = load_pretrained_model(pretrained, None, model_name, device_map=device_map)  # Add any other thing you want to pass in llava_model_args
     12 model.eval()

File /opt/conda/lib/python3.10/site-packages/llava/model/builder.py:283, in load_pretrained_model(model_path, model_base, model_name, load_8bit, load_4bit, device_map, attn_implementation, customized_config, overwrite_config, **kwargs)
    281 if mm_use_im_start_end:
    282     tokenizer.add_tokens([DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN], special_tokens=True)
--> 283 model.resize_token_embeddings(len(tokenizer))
    285 vision_tower = model.get_vision_tower()
    286 if not vision_tower.is_loaded:

File /opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py:2029, in PreTrainedModel.resize_token_embeddings(self, new_num_tokens, pad_to_multiple_of)
   2027 # Update base model and current model config
   2028 if hasattr(self.config, "text_config"):
-> 2029     self.config.text_config.vocab_size = vocab_size
   2030 else:
   2031     self.config.vocab_size = vocab_size

AttributeError: 'dict' object has no attribute 'vocab_size'
```
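From the traceback, it looks like the finetuned checkpoint's config.json stores `text_config` as a plain dict, while `PreTrainedModel.resize_token_embeddings` expects a config object with a `vocab_size` attribute. A possible workaround might be to load the config first, promote the nested dict to a config object, and pass it through the `customized_config` parameter that appears in the builder's signature above. This is an untested sketch, assuming the LLaVA package registers its config classes with `AutoConfig` (importing `llava.model.builder` should trigger that) and that the builder actually honors `customized_config` on this code path:

```python
from transformers import AutoConfig, PretrainedConfig
from llava.model.builder import load_pretrained_model

pretrained = "/workspace/checkpoints/checkpoint-1006"

# Load the checkpoint's config; assumes the llava import above has
# registered the custom model type with AutoConfig.
cfg = AutoConfig.from_pretrained(pretrained)

# Promote the nested dict to a real config object so that
# resize_token_embeddings can set cfg.text_config.vocab_size on it.
if isinstance(getattr(cfg, "text_config", None), dict):
    cfg.text_config = PretrainedConfig.from_dict(cfg.text_config)

tokenizer, model, image_processor, max_length = load_pretrained_model(
    pretrained, None, "llava_qwen", device_map="auto", customized_config=cfg
)
```

I'm not sure whether `customized_config` is threaded all the way through to `from_pretrained` for the `llava_qwen` model name, so this may need adjusting. Is this the intended way to reload a finetuned checkpoint, or is something missing from my saved config?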
