Yu-won Lee
I've written code for finetuning Qwen2.5-VL. You could use this: https://github.com/2U1/Qwen2-VL-Finetune
Yes. 1. For finetuning only the vision module, you can set `freeze_llm` to true in `finetune.sh`. 2. Also, for LoRA, you can add the LLM's layers in the...
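A minimal sketch of what freezing the LLM while training the vision tower amounts to: gradients are disabled for every parameter whose name belongs to the language model. The parameter names and prefixes below are illustrative, not the repo's actual names.

```python
# Sketch: freeze LLM parameters by name prefix so only the vision tower trains.
# Param stands in for a torch.nn.Parameter; prefixes are hypothetical.
class Param:
    def __init__(self, name):
        self.name = name
        self.requires_grad = True

def freeze_llm(params, llm_prefixes=("model.layers.", "lm_head")):
    """Disable gradients for parameters belonging to the language model."""
    for p in params:
        if p.name.startswith(llm_prefixes):  # str.startswith accepts a tuple
            p.requires_grad = False
    return params

params = [
    Param("visual.blocks.0.attn.qkv.weight"),      # vision tower: stays trainable
    Param("model.layers.0.self_attn.q_proj.weight"),  # LLM: frozen
    Param("lm_head.weight"),                        # LLM head: frozen
]
freeze_llm(params)
trainable = [p.name for p in params if p.requires_grad]
```

With real transformers models the same idea is `p.requires_grad_(False)` over `model.named_parameters()`; the `freeze_llm` flag in `finetune.sh` toggles logic of this shape.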
@strawhatboy `No label_names provided for model class `PeftModel`. Since `PeftModel` hides base models input arguments, if label_names is not given, label_names can't be set automatically within `Trainer`. Note that empty...
@strawhatboy I've tested it, but it works fine for me. I'm not sure what the reason is. I'll test a bit more.
-9 usually occurs when the system runs out of memory (OOM). I think removing the `model = model.to(training_args.device)` line would be better.
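For context on why -9 points at OOM: the Linux OOM killer terminates processes with SIGKILL (signal 9), which `subprocess` reports as returncode -9 (shells show it as 137, i.e. 128 + 9). A quick demonstration:

```python
import subprocess

# A process killed by SIGKILL reports returncode -9 via subprocess.
# The Linux OOM killer sends SIGKILL, so a bare -9 during training
# usually means system RAM was exhausted, not a Python exception.
proc = subprocess.run(["sh", "-c", "kill -9 $$"])
print(proc.returncode)  # -9 on Unix
```

Dropping the manual `model.to(training_args.device)` helps because the Trainer (and DeepSpeed, when enabled) handles device placement itself, so the extra move can hold a redundant copy of the weights in memory.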
How is it different?
That's a bit odd. If you are running inference with a different version of transformers, that may be the issue. I'll update the code for the latest one soon. Also will...
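A quick way to verify which transformers version the inference environment actually uses, since training/inference version mismatches are a common source of this. The minimum version shown is a placeholder, not a confirmed requirement of the repo.

```python
from importlib.metadata import version, PackageNotFoundError

def parse_version(v: str) -> tuple:
    """Naive numeric parse of 'X.Y.Z' into a comparable tuple."""
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

def check_version(pkg: str, minimum: str) -> bool:
    """Return True if pkg is installed and at least `minimum` (naive compare)."""
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        return False
    return parse_version(installed) >= parse_version(minimum)

# e.g. check_version("transformers", "4.49.0")  # placeholder minimum
```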
@black-tea01 I couldn't find the cause of the inconsistent answering. Sorry for the late response.
@wangzhiyuan-pixel https://github.com/deepspeedai/DeepSpeed/issues/5659#issuecomment-2234427361 maybe this would work.
It's not an official one, but if you want an easy-to-use finetuning codebase, you could use this: https://github.com/2U1/Qwen2-VL-Finetune In most cases the code would work, but in...