Issue with Merging LoRA in Qwen 2.5 (3B) GRPO

Open raising-heart opened this issue 10 months ago • 5 comments

Hi,

I tested Qwen 2.5 (3B) with GRPO on Kaggle, and after merging using 16-bit, it seems like the LoRA adaptations are not applied properly. The output lacks reasoning compared to the fine-tuned model before merging.

Is there anyone who can help identify what might be going wrong?

Thanks!

raising-heart avatar Feb 15 '25 14:02 raising-heart

Hello, we will check again, but in the meantime, can you elaborate on how you did the merging process, and is it the exact same input? Sometimes people forget to include the system prompt during inference.
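To make the "forgot the system prompt" failure mode concrete, here is a minimal sketch of what including it means; the exact SYSTEM_PROMPT text is an assumption modeled on the GRPO notebook's format prompt, and build_messages is a hypothetical helper:

```python
# Assumed system prompt, mirroring the GRPO notebook's reasoning format.
# If this is omitted at inference time, the model will usually not emit
# <reasoning> tags, which looks just like a failed merge.
SYSTEM_PROMPT = """Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer>"""

def build_messages(user_input: str) -> list[dict]:
    """Prepend the training-time system prompt to every inference request."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("How many r's are in 'strawberry'?")
assert messages[0]["role"] == "system"  # easy to drop after merging/reloading
```

The same messages list should then be passed through the tokenizer's chat template, exactly as was done before merging.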

Erland366 avatar Feb 15 '25 19:02 Erland366

Thanks for checking.

For merging, I used:

model.save_pretrained_merged(new_model_local, tokenizer, save_method="merged_16bit")

After merging, I loaded the model and ran inference with the same input and system prompt as before merging. However, the reasoning part no longer appears in the output.

Let me know if I should test anything specific.

raising-heart avatar Feb 16 '25 07:02 raising-heart

Hey, it's working now. I have figured it out.

raising-heart avatar Feb 24 '25 09:02 raising-heart

I was going through the same notebook yesterday. Did you figure out what was causing it @raising-heart ? I tried it before merging but did not have a look at it after merging the model

sarthak247 avatar Feb 24 '25 23:02 sarthak247

> I was going through the same notebook yesterday. Did you figure out what was causing it @raising-heart ? I tried it before merging but did not have a look at it after merging the model

What issue?

During the conversion/quantization step, change the toggle for the method you selected from "False" to "True".
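For readers of the same notebook, this is presumably the GGUF export cell, where each quantization method sits behind a True/False guard. A hedged sketch of that pattern (the cell layout and method names are assumptions based on Unsloth's notebooks, not the exact cell):

```python
# Sketch of the notebook's GGUF export cell: each line guards one quantization
# method with a toggle. The fix described above is flipping the toggle of the
# method you actually want from False to True; otherwise nothing is exported.
if False: model.save_pretrained_gguf("model", tokenizer)                                # default q8_0
if True:  model.save_pretrained_gguf("model", tokenizer, quantization_method="q4_k_m")  # selected method
if False: model.save_pretrained_gguf("model", tokenizer, quantization_method="f16")
```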

Try setting the Prompt Format in LM Studio like below:

"""Respond in the following format: <reasoning> ... </reasoning> <answer> ... </answer> """

raising-heart avatar Feb 25 '25 15:02 raising-heart