Load LoRA in cli_demo.py doesn't work
System Info / 系統信息
I used the train_ddp_i2v.sh script to finetune a LoRA, then loaded the saved LoRA weights with the --load_lora argument in cli_demo.py. However, the outputs look exactly the same as those produced without LoRA, even though validation during training produced the expected results. It appears that the --load_lora option isn't functioning as intended.
Information / 问题信息
- [x] The official example scripts / 官方的示例脚本
- [ ] My own modified scripts / 我自己修改的脚本和任务
Reproduction / 复现过程
- Use the train_ddp_i2v.sh script to finetune a LoRA
- Load the LoRA via cli_demo.py with --load_lora (see the sketch below)
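For reference, this is roughly what the inference-time loading step looks like. A minimal sketch, assuming a diffusers-style pipeline with LoRA loader support; the model ID, LoRA path, and adapter name are placeholders, not the exact code in cli_demo.py:

```python
import torch
from diffusers import CogVideoXImageToVideoPipeline

# Base I2V pipeline (model ID is an assumption; use whatever cli_demo.py loads).
pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX-5b-I2V", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the LoRA produced by train_ddp_i2v.sh (placeholder path).
pipe.load_lora_weights("path/to/lora_checkpoint", adapter_name="i2v_lora")

# If the weights load but the output is identical to the base model, the adapter
# may simply not be active or its scale may be 0; setting both explicitly rules
# that out.
pipe.set_adapters(["i2v_lora"], adapter_weights=[1.0])

# Alternatively, fold the LoRA into the base weights before generation:
# pipe.fuse_lora(lora_scale=1.0)
```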
Expected behavior / 期待表现
The outputs should match the validation results produced during training.
Thought I was going insane when I saw such a difference between the wandb validation samples and my local outputs! Can confirm this is a bug for me as well.
We are reviewing this pull request and will respond soon.
Hi, were you able to fully reproduce the validation results with the fix? Even after applying the scale correctly, there are still some differences...
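One thing worth checking is the effective scale. A minimal sketch of the idea, assuming a standard PEFT-style LoRA checkpoint with an adapter_config.json next to the weights (the path is a placeholder):

```python
import json

# The effective update applied at inference is roughly:
#   W_eff = W + runtime_scale * (lora_alpha / rank) * (B @ A)
# so a mismatch between the alpha/rank used in training and the runtime
# lora_scale passed at load/fuse time can leave small differences even
# when the weights themselves are loaded correctly.
with open("path/to/lora_checkpoint/adapter_config.json") as f:  # placeholder path
    cfg = json.load(f)

rank, alpha = cfg.get("r"), cfg.get("lora_alpha")
print(f"rank={rank}, lora_alpha={alpha}, alpha/rank={alpha / rank:.3f}")
```

Remaining small differences can also come from different seeds, resolutions, or scheduler settings between the training-time validation loop and cli_demo.py, so those are worth ruling out before blaming the LoRA loading itself.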
Has this issue been resolved?