MiniCPM-V
Add finetuning support for setting `vision_lr` and `resampler_lr`
feat: add function for building separate optimizer parameter groups so that `vision_lr` and `resampler_lr` can be set independently (see the sketch below)
fix: resolve issue where saving was not functioning correctly
chore: update args with hyperparameters for improved fine-tuning performance
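For reference, here is a minimal sketch of how separate parameter groups for the vision encoder and resampler might be built. The module-name prefixes (`vpm`, `resampler`) and the helper name are assumptions for illustration, not the exact implementation in this PR; adjust them to the actual MiniCPM-V module layout.

```python
import torch
from torch.optim import AdamW

def create_optimizer(model, base_lr=1e-5, vision_lr=2e-6,
                     resampler_lr=1e-5, weight_decay=0.1):
    """Hypothetical helper: assign different learning rates per sub-module."""
    vision_params, resampler_params, other_params = [], [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        if name.startswith("vpm"):           # assumed vision-encoder prefix
            vision_params.append(param)
        elif name.startswith("resampler"):   # assumed resampler prefix
            resampler_params.append(param)
        else:                                # LLM and remaining parameters
            other_params.append(param)

    param_groups = [
        {"params": other_params, "lr": base_lr},
        {"params": vision_params, "lr": vision_lr},
        {"params": resampler_params, "lr": resampler_lr},
    ]
    return AdamW(param_groups, weight_decay=weight_decay)
```

In practice this would be wired into the training loop (or a Trainer `create_optimizer` override) so the vision tower, resampler, and LLM each train with their own learning rate.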
Could you merge this PR? It would be helpful for people who want to finetune the model, following other works such as LLaVA-NeXT. In my experience, setting separate learning rates converges a bit faster and gave better performance on my task.