[Bug] Failed to merge lora, using tools/merge_lora.py
Checklist
- [x] 1. I have searched related issues but could not find the help I expected.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. I understand that if a bug report lacks environment info and a minimal reproducible demo, it will be hard to reproduce and resolve the issue, which reduces the likelihood of receiving feedback.
Describe the bug
Cannot merge the LoRA weights of a fine-tuned InternVL model. I followed the tutorial in https://github.com/OpenGVLab/InternVL/issues/433
Reproduction
python tools/merge_lora.py ./ckpts/InternVL2/internvl2_8b_internlm2_7b_dynamic_res_2nd_finetune_lora_vcg112k/ ckpts/InternVL2/internvl2_8b_internlm2_7b_dynamic_res_2nd_finetune_lora_vcg112k_merge
Environment
Follow the official installation method
Error traceback
Loading model...
trainable params: 37,748,736 || all params: 7,775,531,008 || trainable%: 0.4855
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:01<00:00, 2.24it/s]
Some weights of the model checkpoint at ./ckpts/InternVL2/internvl2_8b_internlm2_7b_dynamic_res_2nd_finetune_lora_VCG_vcg112k/ were not used when initializing InternVLChatModel: ['language_model.base_model.model.model.layers.0.attention.wo.weight', 'language_model.base_model.model.model.layers.0.attention.wqkv.weight', 'language_model.base_model.model.model.layers.0.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.0.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.0.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.1.attention.wo.weight', 'language_model.base_model.model.model.layers.1.attention.wqkv.weight', 'language_model.base_model.model.model.layers.1.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.1.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.1.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.10.attention.wo.weight', 'language_model.base_model.model.model.layers.10.attention.wqkv.weight', 'language_model.base_model.model.model.layers.10.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.10.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.10.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.11.attention.wo.weight', 'language_model.base_model.model.model.layers.11.attention.wqkv.weight', 'language_model.base_model.model.model.layers.11.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.11.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.11.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.12.attention.wo.weight', 'language_model.base_model.model.model.layers.12.attention.wqkv.weight', 'language_model.base_model.model.model.layers.12.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.12.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.12.feed_forward.w3.weight', 
'language_model.base_model.model.model.layers.13.attention.wo.weight', 'language_model.base_model.model.model.layers.13.attention.wqkv.weight', 'language_model.base_model.model.model.layers.13.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.13.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.13.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.14.attention.wo.weight', 'language_model.base_model.model.model.layers.14.attention.wqkv.weight', 'language_model.base_model.model.model.layers.14.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.14.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.14.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.15.attention.wo.weight', 'language_model.base_model.model.model.layers.15.attention.wqkv.weight', 'language_model.base_model.model.model.layers.15.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.15.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.15.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.16.attention.wo.weight', 'language_model.base_model.model.model.layers.16.attention.wqkv.weight', 'language_model.base_model.model.model.layers.16.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.16.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.16.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.17.attention.wo.weight', 'language_model.base_model.model.model.layers.17.attention.wqkv.weight', 'language_model.base_model.model.model.layers.17.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.17.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.17.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.18.attention.wo.weight', 'language_model.base_model.model.model.layers.18.attention.wqkv.weight', 
'language_model.base_model.model.model.layers.18.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.18.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.18.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.19.attention.wo.weight', 'language_model.base_model.model.model.layers.19.attention.wqkv.weight', 'language_model.base_model.model.model.layers.19.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.19.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.19.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.2.attention.wo.weight', 'language_model.base_model.model.model.layers.2.attention.wqkv.weight', 'language_model.base_model.model.model.layers.2.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.2.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.2.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.20.attention.wo.weight', 'language_model.base_model.model.model.layers.20.attention.wqkv.weight', 'language_model.base_model.model.model.layers.20.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.20.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.20.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.21.attention.wo.weight', 'language_model.base_model.model.model.layers.21.attention.wqkv.weight', 'language_model.base_model.model.model.layers.21.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.21.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.21.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.22.attention.wo.weight', 'language_model.base_model.model.model.layers.22.attention.wqkv.weight', 'language_model.base_model.model.model.layers.22.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.22.feed_forward.w2.weight', 
'language_model.base_model.model.model.layers.22.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.23.attention.wo.weight', 'language_model.base_model.model.model.layers.23.attention.wqkv.weight', 'language_model.base_model.model.model.layers.23.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.23.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.23.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.24.attention.wo.weight', 'language_model.base_model.model.model.layers.24.attention.wqkv.weight', 'language_model.base_model.model.model.layers.24.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.24.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.24.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.25.attention.wo.weight', 'language_model.base_model.model.model.layers.25.attention.wqkv.weight', 'language_model.base_model.model.model.layers.25.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.25.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.25.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.26.attention.wo.weight', 'language_model.base_model.model.model.layers.26.attention.wqkv.weight', 'language_model.base_model.model.model.layers.26.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.26.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.26.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.27.attention.wo.weight', 'language_model.base_model.model.model.layers.27.attention.wqkv.weight', 'language_model.base_model.model.model.layers.27.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.27.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.27.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.28.attention.wo.weight', 
'language_model.base_model.model.model.layers.28.attention.wqkv.weight', 'language_model.base_model.model.model.layers.28.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.28.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.28.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.29.attention.wo.weight', 'language_model.base_model.model.model.layers.29.attention.wqkv.weight', 'language_model.base_model.model.model.layers.29.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.29.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.29.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.3.attention.wo.weight', 'language_model.base_model.model.model.layers.3.attention.wqkv.weight', 'language_model.base_model.model.model.layers.3.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.3.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.3.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.30.attention.wo.weight', 'language_model.base_model.model.model.layers.30.attention.wqkv.weight', 'language_model.base_model.model.model.layers.30.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.30.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.30.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.31.attention.wo.weight', 'language_model.base_model.model.model.layers.31.attention.wqkv.weight', 'language_model.base_model.model.model.layers.31.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.31.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.31.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.4.attention.wo.weight', 'language_model.base_model.model.model.layers.4.attention.wqkv.weight', 'language_model.base_model.model.model.layers.4.feed_forward.w1.weight', 
'language_model.base_model.model.model.layers.4.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.4.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.5.attention.wo.weight', 'language_model.base_model.model.model.layers.5.attention.wqkv.weight', 'language_model.base_model.model.model.layers.5.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.5.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.5.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.6.attention.wo.weight', 'language_model.base_model.model.model.layers.6.attention.wqkv.weight', 'language_model.base_model.model.model.layers.6.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.6.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.6.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.7.attention.wo.weight', 'language_model.base_model.model.model.layers.7.attention.wqkv.weight', 'language_model.base_model.model.model.layers.7.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.7.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.7.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.8.attention.wo.weight', 'language_model.base_model.model.model.layers.8.attention.wqkv.weight', 'language_model.base_model.model.model.layers.8.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.8.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.8.feed_forward.w3.weight', 'language_model.base_model.model.model.layers.9.attention.wo.weight', 'language_model.base_model.model.model.layers.9.attention.wqkv.weight', 'language_model.base_model.model.model.layers.9.feed_forward.w1.weight', 'language_model.base_model.model.model.layers.9.feed_forward.w2.weight', 'language_model.base_model.model.model.layers.9.feed_forward.w3.weight']
- This IS expected if you are initializing InternVLChatModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing InternVLChatModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of InternVLChatModel were not initialized from the model checkpoint at ./ckpts/InternVL2/internvl2_8b_internlm2_7b_dynamic_res_2nd_finetune_lora_VCG_vcg112k/ and are newly initialized: ['language_model.base_model.model.model.layers.0.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.0.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.0.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.0.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.1.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.1.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.1.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.1.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.10.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.10.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.10.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.10.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.11.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.11.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.11.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.11.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.12.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.12.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.12.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.12.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.13.attention.wqkv.base_layer.weight', 
'language_model.base_model.model.model.layers.13.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.13.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.13.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.14.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.14.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.14.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.14.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.15.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.15.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.15.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.15.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.16.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.16.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.16.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.16.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.17.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.17.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.17.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.17.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.18.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.18.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.18.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.18.feed_forward.w3.base_layer.weight', 
'language_model.base_model.model.model.layers.19.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.19.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.19.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.19.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.2.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.2.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.2.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.2.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.20.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.20.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.20.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.20.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.21.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.21.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.21.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.21.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.22.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.22.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.22.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.22.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.23.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.23.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.23.feed_forward.w2.base_layer.weight', 
'language_model.base_model.model.model.layers.23.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.24.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.24.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.24.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.24.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.25.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.25.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.25.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.25.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.26.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.26.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.26.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.26.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.27.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.27.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.27.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.27.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.28.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.28.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.28.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.28.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.29.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.29.feed_forward.w1.base_layer.weight', 
'language_model.base_model.model.model.layers.29.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.29.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.3.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.3.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.3.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.3.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.30.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.30.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.30.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.30.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.31.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.31.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.31.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.31.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.4.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.4.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.4.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.4.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.5.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.5.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.5.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.5.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.6.attention.wqkv.base_layer.weight', 
'language_model.base_model.model.model.layers.6.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.6.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.6.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.7.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.7.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.7.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.7.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.8.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.8.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.8.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.8.feed_forward.w3.base_layer.weight', 'language_model.base_model.model.model.layers.9.attention.wqkv.base_layer.weight', 'language_model.base_model.model.model.layers.9.feed_forward.w1.base_layer.weight', 'language_model.base_model.model.model.layers.9.feed_forward.w2.base_layer.weight', 'language_model.base_model.model.model.layers.9.feed_forward.w3.base_layer.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Loading tokenizer...
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Saving model...
Removed shared tensor {'language_model.model.layers.22.attention.wo.weight', 'language_model.model.layers.17.attention.wo.weight', 'language_model.model.layers.2.attention.wo.weight', 'language_model.model.layers.15.attention.wo.weight', 'language_model.model.layers.20.attention.wo.weight', 'language_model.model.layers.6.attention.wo.weight', 'language_model.model.layers.11.attention.wo.weight', 'language_model.model.layers.27.attention.wo.weight', 'language_model.model.layers.10.attention.wo.weight', 'language_model.model.layers.23.attention.wo.weight', 'language_model.model.layers.14.attention.wo.weight', 'language_model.model.layers.26.attention.wo.weight', 'language_model.model.layers.29.attention.wo.weight', 'language_model.model.layers.24.attention.wo.weight', 'language_model.model.layers.3.attention.wo.weight', 'language_model.model.layers.7.attention.wo.weight', 'language_model.model.layers.4.attention.wo.weight', 'language_model.model.layers.9.attention.wo.weight', 'language_model.model.layers.30.attention.wo.weight', 'language_model.model.layers.5.attention.wo.weight', 'language_model.model.layers.21.attention.wo.weight', 'language_model.model.layers.25.attention.wo.weight', 'language_model.model.layers.28.attention.wo.weight', 'language_model.model.layers.12.attention.wo.weight', 'language_model.model.layers.16.attention.wo.weight', 'language_model.model.layers.19.attention.wo.weight', 'language_model.model.layers.18.attention.wo.weight', 'language_model.model.layers.1.attention.wo.weight', 'language_model.model.layers.8.attention.wo.weight', 'language_model.model.layers.31.attention.wo.weight', 'language_model.model.layers.13.attention.wo.weight'} while saving. This should be OK, but check by verifying that you don't receive any warning while reloading
Traceback (most recent call last):
File "InternVL/internvl_chat/tools/merge_lora.py", line 28, in <module>
model.save_pretrained(args.output_path)
File "~/envs/internvl/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2486, in save_pretrained
safe_save_file(shard, os.path.join(save_directory, shard_file), metadata={"format": "pt"})
File "~/envs/internvl/lib/python3.9/site-packages/safetensors/torch.py", line 286, in save_file
serialize_file(_flatten(tensors), filename, metadata=metadata)
File "~/envs/internvl/lib/python3.9/site-packages/safetensors/torch.py", line 496, in _flatten
return {
File "~/envs/internvl/lib/python3.9/site-packages/safetensors/torch.py", line 500, in <dictcomp>
"data": _tobytes(v, k),
File "~/envs/internvl/lib/python3.9/site-packages/safetensors/torch.py", line 422, in _tobytes
tensor = tensor.to("cpu")
NotImplementedError: Cannot copy out of meta tensor; no data!
Hi @czczup, could you take a look at this issue? Merging with tools/merge_lora.py did not succeed.
You could add a check to verify that every parameter in the model has been initialized. If some parameters are uninitialized, try moving those tensors to CPU or GPU to force initialization. You can also try setting low_cpu_mem_usage to False and see whether that resolves the problem.
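The check suggested above can be sketched as follows. Uninitialized parameters show up as "meta" tensors (allocated without data), which is exactly what the NotImplementedError in the traceback complains about. The helper below is a minimal illustration on a toy module, not the actual merge_lora.py code; the commented from_pretrained call is a hypothetical tweak showing where low_cpu_mem_usage=False would go.

```python
import torch
import torch.nn as nn

def find_meta_params(model: nn.Module) -> list:
    """Return the names of parameters that are still meta tensors (no data)."""
    return [name for name, param in model.named_parameters() if param.is_meta]

# Demo on a toy module: a Linear created on the meta device has no real data,
# analogous to the newly-initialized base_layer weights in the warning above.
toy = nn.Linear(4, 4, device="meta")
print(find_meta_params(toy))  # ['weight', 'bias']

# Hypothetical use in merge_lora.py before model.save_pretrained(...):
# meta = find_meta_params(model)
# if meta:
#     raise RuntimeError(f"{len(meta)} uninitialized params, e.g. {meta[:3]}")
#
# Reloading without the lazy meta-device path may avoid the problem entirely:
# model = InternVLChatModel.from_pretrained(path, low_cpu_mem_usage=False, ...)
```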
Some parameters are indeed uninitialized. How do I force-initialize them?
It works for me! Thanks!
I even get:
File "/DATA/workshop/personal/InternVL-main/internvl_chat/tools/merge_lora.py", line 4, in <module>
from internvl.model.internvl_chat import InternVLChatModel
ModuleNotFoundError: No module named 'internvl'
Did I not set the path correctly?
Try running this first: export PYTHONPATH="${PYTHONPATH}:/DATA/workshop/personal/InternVL-main/internvl_chat/"
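If setting the environment variable is inconvenient, the same effect can be achieved inside the script before the failing import. This is a sketch; the directory is the one from the traceback above, so adjust it to your own checkout:

```python
import sys

# Put the repo's internvl_chat directory on sys.path so `import internvl`
# resolves; this mirrors the PYTHONPATH export from the previous comment.
REPO_DIR = "/DATA/workshop/personal/InternVL-main/internvl_chat"
if REPO_DIR not in sys.path:
    sys.path.insert(0, REPO_DIR)

# from internvl.model.internvl_chat import InternVLChatModel  # should now import
```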