[BUG] After merge_and_unload, the model parameters are exactly the same as the original model's; nothing changed
是否已有关于该错误的issue或讨论? | Is there an existing issue / discussion for this?
- [X] 我已经搜索过已有的issues和讨论 | I have searched the existing issues / discussions
该问题是否在FAQ中有解答? | Is there an existing answer for this in FAQ?
- [X] 我已经搜索过FAQ | I have searched FAQ
当前行为 | Current Behavior
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModel
from peft import LoraConfig, get_peft_config, get_peft_model, PeftModel

base_model = AutoModel.from_pretrained(
    'xxx/MiniCPM-V-2_6',
    torch_dtype=torch.bfloat16,
    device_map={"": "cuda"},
    trust_remote_code=True,
)

lora_model = PeftModel.from_pretrained(
    base_model,
    'xxx/minicpm_v0_ckpts/checkpoint-1000',
    device_map={"": "cuda"},
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)

# Merge the LoRA weights into the base model and drop the adapter wrappers.
model = lora_model.merge_and_unload()
lora_model.train(False)

tokenizer = AutoTokenizer.from_pretrained('xxx/MiniCPM-V-2_6', trust_remote_code=True)
model.save_pretrained("xxx/merged_checkpoint-1000")
tokenizer.save_pretrained("xxx/merged_checkpoint-1000")
```
After merge_and_unload, the merged model's parameters are exactly identical to the original base model's; nothing has changed.
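For reference, below is a minimal sanity check, using only standard PyTorch/PEFT calls and the same placeholder paths as above. It compares the merged weights against a copy of the base weights taken before the adapter is attached, and also checks whether the adapter's lora_B matrices are non-zero (if they are all zero, merging is mathematically a no-op). The 'lora_B' parameter naming assumes a recent PEFT version.

```python
import torch
from transformers import AutoModel
from peft import PeftModel

# Same placeholder paths as in the snippet above.
base_model = AutoModel.from_pretrained(
    'xxx/MiniCPM-V-2_6',
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

# Snapshot the base weights before the adapter is attached.
# (For a large model you may want to restrict this to the adapter's target modules.)
reference = {k: v.detach().clone() for k, v in base_model.state_dict().items()}

lora_model = PeftModel.from_pretrained(
    base_model,
    'xxx/minicpm_v0_ckpts/checkpoint-1000',
)

# If every lora_B matrix is zero (e.g. an untrained adapter), the merge delta is zero.
lora_b_sum = sum(
    p.float().abs().sum().item()
    for n, p in lora_model.named_parameters()
    if 'lora_B' in n
)
print('total |lora_B|:', lora_b_sum)

merged = lora_model.merge_and_unload()

changed = [
    k for k, v in merged.state_dict().items()
    if k in reference and not torch.equal(v, reference[k])
]
print(f'{len(changed)} tensors differ from the base model after merging')
```

In my run the number of differing tensors is 0, which is why I believe the merge is not being applied.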
期望行为 | Expected Behavior
No response
复现方法 | Steps To Reproduce
No response
运行环境 | Environment
- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):
备注 | Anything else?
No response
+1, I met the same issue.
Please try our new training code~