zhudongwork
> I've encountered the same issue. Full-parameter fine-tuning works perfectly, but LoRA fine-tuning produces garbled results. I'm attempting to migrate my project from the zjysteven/lmms-finetune repository, where LoRA fine-tuning works...
> > In addition, have you checked the trainable params when doing LoRA fine-tuning? I found that if you print out all trainable params at this line [llava/train/train.py#L1695](https://github.com/LLaVA-VL/LLaVA-NeXT/blob/333d6fc705f8b62325c61fda70a629cdfcf54129/llava/train/train.py#L1695), just...
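For anyone wanting to do the same check: below is a minimal sketch (not part of `llava/train/train.py`) of how the trainable params could be dumped near the line linked above, right after the PEFT wrapping, to confirm that only the intended LoRA modules actually have `requires_grad=True`.

```python
# Hypothetical helper, not from the repo; call it on the model right before
# training starts to see exactly which parameters will receive gradients.
def print_trainable_parameters(model):
    trainable, total = 0, 0
    for name, param in model.named_parameters():
        total += param.numel()
        if param.requires_grad:
            trainable += param.numel()
            print(name)  # expect names like ...lora_A.default.weight / lora_B.default.weight
    print(f"trainable params: {trainable} / {total} ({100 * trainable / total:.2f}%)")
```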
> lxr_load_llava_next_ov Good job! I carefully reviewed the `load_pretrained_model` function and noticed that when the `model_name` includes "lora", there is no processing logic for the Qwen series of...
elif "qwen" in model_name.lower(): from llava.model.language_model.llava_qwen import LlavaQwenConfig if overwrite_config is not None: llava_cfg = LlavaQwenConfig.from_pretrained(model_path) rank0_print(f"Overwriting config with {overwrite_config}") for k, v in overwrite_config.items(): setattr(llava_cfg, k, v) model =...
> ``` > elif "qwen" in model_name.lower(): > from llava.model.language_model.llava_qwen import LlavaQwenConfig > if overwrite_config is not None: > llava_cfg = LlavaQwenConfig.from_pretrained(model_path) > rank0_print(f"Overwriting config with {overwrite_config}") > for k,...