
Error when converting a llava-phi3 model to LLaVA format. How can I fix this?

Open · awzhgw opened this issue 1 year ago · 1 comment

```python
Traceback (most recent call last):
  File "/export/App/training_platform/PinoModel/xtuner/xtuner/configs/llava/phi3_mini_4k_v16/convert_xtuner_weights_to_llava.py", line 99, in <module>
    main()
  File "/export/App/training_platform/PinoModel/xtuner/xtuner/configs/llava/phi3_mini_4k_v16/convert_xtuner_weights_to_llava.py", line 94, in main
    convert_to_llava(args.text_model_id, args.vision_model_id,
  File "/export/App/training_platform/PinoModel/xtuner/xtuner/configs/llava/phi3_mini_4k_v16/convert_xtuner_weights_to_llava.py", line 80, in convert_to_llava
    model.load_state_dict(state_dict, strict=True, assign=True)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 2152, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LlavaLlamaForCausalLM:
	Missing key(s) in state_dict: "model.image_newline".
```
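For context on why this raises: with `strict=True`, PyTorch's `load_state_dict` compares the checkpoint's keys against every parameter the model expects and raises a `RuntimeError` if any are missing or unexpected. Below is a minimal pure-Python sketch of that checking logic (not torch itself; the key names are taken from the traceback above, the helper name is made up):

```python
def check_strict(model_keys, state_dict_keys):
    """Mimic the key comparison load_state_dict(strict=True) performs.

    Returns (missing, unexpected): keys the model expects but the
    checkpoint lacks, and keys the checkpoint has but the model doesn't.
    """
    missing = [k for k in model_keys if k not in state_dict_keys]
    unexpected = [k for k in state_dict_keys if k not in model_keys]
    return missing, unexpected


# The converted checkpoint lacks "model.image_newline", which the
# LlavaLlamaForCausalLM class declares as a parameter:
model_keys = ["model.embed_tokens.weight", "model.image_newline"]
ckpt_keys = ["model.embed_tokens.weight"]
missing, unexpected = check_strict(model_keys, ckpt_keys)
# missing == ["model.image_newline"]  -> strict=True would raise RuntimeError
```

Passing `strict=False` would silently skip the missing key instead of raising, but that leaves `model.image_newline` randomly initialized, so it is not a real fix here.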

awzhgw avatar May 04 '24 10:05 awzhgw

@awzhgw The convert_xtuner_weights_to_llava.py script only supports models whose LLM has a Llama architecture.

We will update the Phi-3 conversion script soon; the Phi-3 weights first need to be converted to the Llama structure.
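To illustrate what "converting Phi-3 to the Llama structure" involves: Hugging Face's Phi-3 implementation fuses the attention projections into a single `qkv_proj` and the MLP projections into `gate_up_proj`, whereas Llama-style checkpoints keep `q_proj`/`k_proj`/`v_proj` and `gate_proj`/`up_proj` separate. The sketch below shows that kind of key splitting on plain lists standing in for tensors; the function name is hypothetical, the split sizes are illustrative (real Phi-3 k/v widths depend on the number of KV heads), and this is not the official xtuner script:

```python
def split_fused_phi3_keys(state_dict, hidden_size):
    """Split Phi-3's fused projection weights into Llama-style keys.

    Assumes q, k, and v each span `hidden_size` rows of the fused
    tensor, which holds for Phi-3-mini (no grouped-query asymmetry).
    """
    out = {}
    for key, weight in state_dict.items():
        if key.endswith("self_attn.qkv_proj.weight"):
            # Fused [q; k; v] -> three separate Llama-style tensors.
            base = key.replace("qkv_proj", "{}")
            out[base.format("q_proj")] = weight[:hidden_size]
            out[base.format("k_proj")] = weight[hidden_size:2 * hidden_size]
            out[base.format("v_proj")] = weight[2 * hidden_size:]
        elif key.endswith("mlp.gate_up_proj.weight"):
            # Fused [gate; up] -> two halves.
            half = len(weight) // 2
            out[key.replace("gate_up_proj", "gate_proj")] = weight[:half]
            out[key.replace("gate_up_proj", "up_proj")] = weight[half:]
        else:
            out[key] = weight
    return out


# Toy example: rows are plain numbers instead of weight vectors.
sd = {"model.layers.0.self_attn.qkv_proj.weight": [0, 1, 2, 3, 4, 5]}
split = split_fused_phi3_keys(sd, hidden_size=2)
# split["model.layers.0.self_attn.q_proj.weight"] == [0, 1]
```

Once the keys (and tensor layouts) match the Llama naming, a strict `load_state_dict` against a Llama-based LLaVA model can succeed.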

pppppM avatar May 06 '24 04:05 pppppM