I use SMU instead of SILU in YoloV5, but loss shows up as nan

Open · mzzjuve opened this issue on Apr 10, 2022 · 2 comments

I use SMU instead of SILU in YoloV5, but loss shows up as nan.

Could you please tell me the possible reason? Or is it normal for this to happen in the first few epochs?

[screenshot: training log showing nan loss values]

mzzjuve · Apr 10 '22

@mzzjuve Thanks for the information you shared. I suppose you are using alpha=0.25 and mu=100000. Instead, I recommend initializing alpha at 0.01 and mu at 2.0 or 2.5 (with mu as a trainable parameter) for SMU, and then running your experiments. From my experience, these initializations give better results, and the loss should not be nan with these parameter values. Please let me know if you still get nan.

koushik313 · Apr 10 '22
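
For concreteness, here is a minimal PyTorch sketch of an SMU module initialized as recommended above (alpha=0.01, mu=2.5, with mu trainable). The formula follows the SMU paper's erf-based smooth approximation of max(x, alpha*x); the class and parameter names here are illustrative and may differ from this repo's own implementation:

```python
import torch
import torch.nn as nn


class SMU(nn.Module):
    """Smooth Maximum Unit (illustrative sketch):
    SMU(x) = ((1 + alpha) * x + (1 - alpha) * x * erf(mu * (1 - alpha) * x)) / 2,
    a smooth approximation of max(x, alpha * x).
    """

    def __init__(self, alpha: float = 0.01, mu: float = 2.5):
        super().__init__()
        # alpha kept fixed; mu trainable, per the recommendation above
        self.alpha = alpha
        self.mu = nn.Parameter(torch.tensor(mu))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return ((1 + self.alpha) * x
                + (1 - self.alpha) * x * torch.erf(self.mu * (1 - self.alpha) * x)) / 2
```

To try this in YOLOv5, the usual approach is to swap the `nn.SiLU()` activation used by the `Conv` block in `models/common.py` for an `SMU()` instance. Note that with a very large mu such as the 100000 mentioned above, the erf term saturates almost immediately, which the maintainer identifies as the likely source of the nan loss.
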

Thank you for your timely reply. The problem was solved after I modified the parameters; I'll keep training. Thank you for your excellent work!

mzzjuve · Apr 11 '22