MIC
Varying Results across Trainings
Dear author,
Thank you so much for this great work!
I have encountered a problem in my experiment and hope to get your guidance.
In my experiments, I found that even with the exact same code, the final results are not identical, and the metrics already diverge during training (for example, when I simply re-run the same code). Why is that?
Dear @kaigelee,
The results will vary even with the same code and seed, as PyTorch has multiple sources of nondeterministic behavior that cannot be completely eliminated (e.g., differentiating bilinear upsampling), which can result in different training outcomes. You can read more about it here: https://pytorch.org/docs/stable/notes/randomness.html.
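For reference, here is a minimal sketch of the seeding and determinism settings described in the linked randomness notes (the helper name seed_everything is just an example; even with all of these flags, ops like the backward pass of bilinear upsampling have no deterministic CUDA implementation, so exact reproducibility cannot be guaranteed):

```python
import os
import random

import numpy as np
import torch


def seed_everything(seed: int = 42) -> None:
    """Seed the common RNGs and prefer deterministic kernels."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Disable cuDNN autotuning, which selects kernels nondeterministically.
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True
    # Needed by some deterministic cuBLAS routines; should be set before
    # the first CUDA call to take effect.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    # Warn (rather than error) on ops without a deterministic variant,
    # e.g. the backward pass of bilinear upsampling.
    torch.use_deterministic_algorithms(True, warn_only=True)
```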
Best, Lukas
Thanks for the quick answer. How can I make the experiments as stable (reproducible) as possible, especially for ablation experiments?
To ensure meaningful ablations despite the influence of randomness, we reported the mean over three training runs.
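As an illustration, reporting the spread over seeds can look like this (the mIoU numbers below are made up):

```python
import statistics

# Hypothetical mIoU results of the same ablation, trained with three seeds
# (values are made up for illustration).
miou_per_seed = [75.9, 76.4, 75.6]

mean = statistics.mean(miou_per_seed)
std = statistics.stdev(miou_per_seed)
print(f"mIoU: {mean:.1f} +/- {std:.1f} over {len(miou_per_seed)} seeds")
```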
Dear Lukas, is it normal that the random masks generated in each iteration differ between runs, even with the same code and seed? Have you observed this phenomenon? I suspect this is why the same code yields different performance. Do you know how to fix it? Looking forward to your reply, thank you.
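For example, I would expect drawing the masks from a dedicated, re-seeded generator to make the mask stream repeatable (sketch below with a hypothetical random_patch_mask, not MIC's actual masking code):

```python
import torch


def random_patch_mask(shape, ratio, generator=None):
    """Hypothetical stand-in for patch masking; not the repo's code."""
    return (torch.rand(shape, generator=generator) > ratio).float()


# Re-seeding a dedicated generator makes the mask stream repeatable,
# independent of other consumers of the global RNG.
g = torch.Generator().manual_seed(0)
m1 = random_patch_mask((4, 4), 0.5, generator=g)

g = torch.Generator().manual_seed(0)
m2 = random_patch_mask((4, 4), 0.5, generator=g)

assert torch.equal(m1, m2)  # identical masks after re-seeding
```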