xtuner
shape mismatch when loading llava-phi pth
I tried to load a pretrained llava pth, hub/llava-phi-3-mini-pth/model.pth, and got this strange error:
- using DeepSpeed ZeRO-3 and flash-attn.
RuntimeError: Error(s) in loading state_dict for LLaVAModel:
size mismatch for llm.model.embed_tokens.weight: copying a param with shape torch.Size([32064, 3072]) from checkpoint, the shape in current model is torch.Size([0]).
size mismatch for llm.model.layers.0.self_attn.o_proj.weight: copying a param with shape torch.Size([3072, 3072]) from checkpoint, the shape in current model is torch.Size([0]).
any clue? thx!
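For context, a minimal sketch of the kind of load that triggers this error (the actual loading code isn't shown above; model and the exact load call are assumptions):

import torch

# Hypothetical repro sketch: loading a full state_dict into a model whose
# parameters have been partitioned by DeepSpeed ZeRO-3. Outside a gather
# context the local parameter tensors have shape torch.Size([0]), which
# produces the size-mismatch RuntimeError shown above.
state_dict = torch.load("hub/llava-phi-3-mini-pth/model.pth", map_location="cpu")
model.load_state_dict(state_dict)  # `model` assumed to be the LLaVAModel instance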
The issue might be due to the local model not being initialized correctly. Before loading the checkpoint, check whether the model contains the key llm.model.layers.0.self_attn.o_proj.weight.
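A minimal sketch of that check, assuming model is the locally built LLaVAModel instance (the variable name is hypothetical):

key = "llm.model.layers.0.self_attn.o_proj.weight"
state = model.state_dict()  # `model` assumed to be the LLaVAModel instance
if key in state:
    # A shape of torch.Size([0]) here usually means the parameter is still
    # partitioned by DeepSpeed ZeRO-3 rather than genuinely missing.
    print(key, "->", tuple(state[key].shape))
else:
    print(key, "is missing from the local model")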
@shockjiang Can you try this in https://github.com/InternLM/xtuner/blob/main/xtuner/configs/deepspeed/deepspeed_zero3.json?
{"zero_optimization": {
"stage3_prefetch_bucket_size":0}
}
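If editing the file inside the repo is inconvenient, the same override can be applied to a local copy of the config; a minimal sketch, assuming cfg_path points at that copy:

import json

# Sketch only: patch a local copy of the DeepSpeed ZeRO-3 config
# instead of editing the file shipped with the xtuner repo.
cfg_path = "deepspeed_zero3.json"  # assumed local copy of the config
with open(cfg_path) as f:
    cfg = json.load(f)
cfg.setdefault("zero_optimization", {})["stage3_prefetch_bucket_size"] = 0
with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)

The patched file can then be passed to training via xtuner's --deepspeed option (assuming it accepts a path to a JSON config as well as a preset name).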