Issue with Merging LoRA in Qwen 2.5 (3B) GRPO
Hi,
I trained Qwen 2.5 (3B) with GRPO on Kaggle, and after merging to 16-bit, the LoRA adaptations don't seem to be applied properly. The output lacks the reasoning section compared to the fine-tuned model before merging.
Can anyone help identify what might be going wrong?
Thanks!
Hello, we will check again, but in the meantime, can you elaborate on how you did the merging, and was it the exact same input? Sometimes people forget to include the system prompt during inference.
Thanks for checking.
For merging, I used:
model.save_pretrained_merged(new_model_local, tokenizer, save_method="merged_16bit")
After merging, I loaded the model and ran inference with the same input and system prompt as before merging. However, the reasoning part no longer appears in the output.
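For context, here is a minimal sketch of the flow, assuming the Unsloth and Transformers APIs; the paths, user prompt, and generation settings are placeholders, not my exact values:

from unsloth import FastLanguageModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder paths: the saved LoRA adapter and the merged output directory.
lora_dir = "lora_model"
new_model_local = "qwen2.5-3b-grpo-merged"

# The system prompt the GRPO notebook trains with.
SYSTEM_PROMPT = """Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer>"""

# Load the fine-tuned model (base weights + LoRA adapters).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=lora_dir,
    max_seq_length=1024,
    load_in_4bit=True,
)

# Merge the adapters into the base weights at 16-bit and save.
model.save_pretrained_merged(new_model_local, tokenizer, save_method="merged_16bit")

# Reload the merged checkpoint as a plain Transformers model.
merged = AutoModelForCausalLM.from_pretrained(new_model_local, torch_dtype="auto", device_map="auto")
merged_tok = AutoTokenizer.from_pretrained(new_model_local)

# Re-run inference with the SAME system prompt used during training.
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "How many r's are in the word strawberry?"},
]
text = merged_tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = merged_tok(text, return_tensors="pt").to(merged.device)
out = merged.generate(**inputs, max_new_tokens=512)
print(merged_tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))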
Let me know if I should test anything specific.
Hey, it's working now. I figured it out.
I was going through the same notebook yesterday. Did you figure out what was causing it, @raising-heart? I tried it before merging but did not have a look at it after merging the model.
What issue?
During the conversion/quantization step, set the option you selected from "False" to "True" so that the save cell actually runs.
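For anyone else hitting this: in the Unsloth notebooks, each saving/quantization cell is wrapped in a guard that defaults to False, so nothing is actually exported until you flip it. A sketch of the pattern, where model and tokenizer are the trained objects from the notebook:

# Pattern from the notebook's saving cells: the export only runs
# once you flip the guard on the option you want from False to True.
if True:  # was: if False
    model.save_pretrained_merged("model", tokenizer, save_method="merged_16bit")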
Try setting the Prompt Format in LM Studio like below:
"""Respond in the following format: <reasoning> ... </reasoning> <answer> ... </answer> """