[BUG] - Loss calculation problem in train-loop
Add Link
https://pytorch.org/tutorials/beginner/basics/optimization_tutorial.html
Describe the bug
The loss calculation seems wrong. Should we divide the total loss by the number of samples in X rather than by the number of batches?
The optimizer also seems wrong.
Describe your environment
Running on Google Colab
cc @subramen @albanD
Hi @jaggernaut007, are you referring to this line? If that's the case, the loss within the testing loop gives you a single scalar value per batch, and that's what you are accumulating in test_loss. That's why you need to divide after the loop by the number of batches, num_batches.
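For context, here is a condensed sketch of the test loop in question, following the structure of the code at the linked tutorial page (it assumes loss_fn uses the default reduction='mean', e.g. nn.CrossEntropyLoss()):

```python
import torch

def test_loop(dataloader, model, loss_fn):
    size = len(dataloader.dataset)      # total number of samples
    num_batches = len(dataloader)       # total number of batches
    test_loss, correct = 0, 0

    with torch.no_grad():
        for X, y in dataloader:
            pred = model(X)
            # loss_fn returns the MEAN loss over this batch (a single scalar),
            # so test_loss accumulates one per-batch average per iteration.
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()

    # Average of per-batch means -> divide by the number of batches,
    # not by the number of samples.
    test_loss /= num_batches
    correct /= size
    print(f"Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f}")
```

Because loss_fn already averages over each batch, summing those per-batch means and dividing by num_batches gives the average per-sample loss (exactly so when all batches have the same size). Dividing by the number of samples instead would double-count the batch size.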
@svekars unless I am wrong, I think we can close this issue.