How much impact does the system prompt have on the output results?
Hi, I've tried your GRPO Reasoning trainer with Qwen2.5 3B Instruct. I followed your inference with/without LoRA, and it seemed like an 'Aha moment' appeared after training for just 250 steps with the GRPO trainer. But I noticed a difference between the base model and the reasoning model.
For the reasoning model, we add this system prompt, while the base model has no such constraint.
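For reference, the system prompt I mean is the reasoning-format one from the notebook. Roughly (quoting from memory, so the exact wording may differ):

```python
# A sketch of the reasoning-format system prompt used during GRPO training;
# the exact text in the notebook may differ slightly.
SYSTEM_PROMPT = """
Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer>
"""
```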
So I tried ablating the system prompt to see whether it influences the model (see the inference sketch after the results below). The results suggest that the base model will also try to "reason" even without RL training, so I'm not sure the 'Aha moment' really appears in this experiment.
The base model:

without system_prompt (Wrong): 'There are no letters 'r' in the word "strawberry."'
with system_prompt (Correct): '

The reasoning model:

without system_prompt (Wrong): 'There are two 'r's in the word "strawberry."'
with system_prompt (Correct): '
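For anyone who wants to reproduce the ablation, here is a minimal sketch of the comparison, assuming a standard transformers chat-template setup. The model name, generation settings, and prompt text are placeholders, not the notebook's exact values:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Same reasoning-format prompt sketched above (assumed wording).
SYSTEM_PROMPT = (
    "Respond in the following format:\n"
    "<reasoning>\n...\n</reasoning>\n"
    "<answer>\n...\n</answer>"
)

model_name = "Qwen/Qwen2.5-3B-Instruct"  # swap in the GRPO LoRA checkpoint for the reasoning model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

question = 'How many letters "r" are in the word "strawberry"?'

def generate(with_system_prompt: bool) -> str:
    """Run one generation, optionally prepending the system prompt."""
    messages = []
    if with_system_prompt:
        messages.append({"role": "system", "content": SYSTEM_PROMPT})
    messages.append({"role": "user", "content": question})
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=256)
    # Decode only the newly generated tokens.
    return tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)

print("without system prompt:", generate(False))
print("with system prompt:", generate(True))
```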
It seems that the system prompt, rather than the GRPO training, is the key to reasoning and leads to the correct answer. But there's one thing I still need to try: I only trained RL for 250 steps. I'll train for 2500 steps to see if the real 'Aha moment' comes out.
So yes, the system prompt is important, but the number of steps is also way too low - you probably need 500 to 2000.
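For what it's worth, in the TRL/Unsloth GRPO setup the step count is controlled by `max_steps` in `GRPOConfig`. A sketch of bumping it into that range; all other values are illustrative placeholders, so keep whatever your notebook already uses:

```python
from trl import GRPOConfig

training_args = GRPOConfig(
    output_dir="outputs",
    max_steps=1000,              # was 250; try somewhere in 500-2000
    learning_rate=5e-6,          # placeholder; keep the notebook's value
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    num_generations=8,           # GRPO group size per prompt
    max_prompt_length=256,
    max_completion_length=512,
)
```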