While training with GRPO, I noticed that my model crashes. Its loss is 0, its grad_norm and kl are both NaN, and its completions degenerate into "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!".
I ran into the situation shown below while using GRPO. It happens very randomly and can occur at any step of training.
Did you solve the issue?
I lowered my learning rate and changed the initial reward from 0 to 1e-4 (a very small value), after which it stopped happening. The same fix has worked in other issues as well: https://github.com/volcengine/verl/issues/747
In the meantime, I'm hoping someone with more expertise can come up with a more general method and explain why the error occurs.
I don't understand how you changed the initial reward from 0 to 1e-4. Can you give more context on this?
In addition, can you tell us which library versions you are currently using?
cc. @Jintao-Huang Can you tag some people that can help with this issue?
> I don't understand how you changed the initial reward from 0 to 1e-4. Can you give more context on this?
> In addition, can you tell us which library versions you are currently using?
For example, if the model's output doesn't earn any reward, the reward is computed as zero. I simply changed that zero to 1e-4.
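For illustration, here is a minimal sketch of that workaround. The function name, signature, and matching logic are all hypothetical (the thread does not show the actual reward code); the only idea taken from the thread is replacing an exact-zero reward with a small positive floor of 1e-4:

```python
# Hypothetical reward function illustrating the workaround described above:
# a failed completion gets a small positive floor instead of exactly 0.0.
REWARD_FLOOR = 1e-4  # the small value mentioned in the thread

def accuracy_reward(completion: str, answer: str) -> float:
    """Toy accuracy reward: 1.0 for a correct answer, REWARD_FLOOR otherwise.
    The string-match check is illustrative only."""
    reward = 1.0 if completion.strip() == answer.strip() else 0.0
    return reward if reward > 0.0 else REWARD_FLOOR

print(accuracy_reward("42", "42"))     # correct   -> 1.0
print(accuracy_reward("wrong", "42"))  # incorrect -> 0.0001
```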
What's your version of swift? I believe the issue with NaN gradients has been fixed:
https://github.com/modelscope/ms-swift/blob/main/swift/trainers/mixin.py#L264-L281
My swift version is 3.3.0.dev0, same problem; gradient clipping does not seem to work. I use DeepSpeed ZeRO-2.
My swift version is the same as yours, and I also use DeepSpeed ZeRO-2. I find that the method mentioned above does not solve my problem. Something must be wrong, but it happens sporadically and I can't reproduce it reliably.
I also find that it occurs within the first 2k steps (most often within the first 1k). After 2k steps, I never see it again.
I set my temperature to 0 and the "!!!!!!" output appeared right from the start.
With the temperature set to any other value, it's fine.
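Temperature rescales the logits before the softmax; as it approaches 0 the sampling distribution collapses onto the single highest-logit token, so a model that momentarily prefers a degenerate token (like "!") will repeat it indefinitely. A quick numeric illustration (the logits are made up):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Token probabilities from logits scaled by 1/temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.5, 0.5]
print(softmax_with_temperature(logits, 1.0))   # probability spread across tokens
print(softmax_with_temperature(logits, 0.05))  # nearly all mass on token 0
```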
Feel free to reopen if you have any issues.