
💡 [REQUEST] - What is the purpose of `out.backward(torch.randn(1, 10))` in neural_networks_tutorial?

Open Lovkush-A opened this issue 5 months ago • 5 comments

🚀 Describe the improvement or the new tutorial

In the neural networks tutorial for beginners, we have the following:

Zero the gradient buffers of all parameters and backprops with random gradients:

net.zero_grad()
out.backward(torch.randn(1, 10))
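For context, here is a minimal runnable sketch of the quoted snippet. The network is a hypothetical stand-in (the tutorial defines its own `net`); any module whose output has shape `(1, 10)` behaves the same way:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the tutorial's `net`: any module
# producing an output of shape (1, 10) works identically here.
net = nn.Linear(20, 10)

out = net(torch.randn(1, 20))      # out.shape == (1, 10)

net.zero_grad()                    # zero the gradient buffers of all parameters
out.backward(torch.randn(1, 10))   # backprop a *random* upstream gradient

# Every parameter now has a populated .grad tensor of matching shape.
print(net.weight.grad.shape)       # torch.Size([10, 20])
```

The random tensor passed to `backward` plays the role of d(loss)/d(out); since no loss is computed at this point in the tutorial, a random gradient is used purely to demonstrate that the backward pass runs.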

What is the purpose of this? It is not part of standard ML workflows and can be confusing to beginners. (As evidence: I am helping some people learn the basics of ML, and I got questions about this line. That is how I found out about it!)

If there is no good reason for it, then I suggest:

  • dropping these few lines
  • changing the wording of other parts of the page if needed, e.g. 'at this point we covered... calling backward'
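If the lines were dropped, the standard workflow would still exercise `zero_grad()` and `backward()` a few cells later. A sketch of that conventional pattern (network and loss choice are illustrative assumptions, not the tutorial's exact code):

```python
import torch
import torch.nn as nn

# Illustrative network and loss; the tutorial's own net/criterion differ.
net = nn.Linear(20, 10)
criterion = nn.MSELoss()

out = net(torch.randn(1, 20))      # forward pass, shape (1, 10)
target = torch.randn(1, 10)
loss = criterion(out, target)      # scalar loss

net.zero_grad()                    # zero the gradient buffers
loss.backward()                    # backprop; no argument needed for a scalar
```

Calling `loss.backward()` on a scalar is equivalent to `out.backward(g)` where `g` is d(loss)/d(out), which is why the random-gradient call can look redundant once a loss is introduced.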

Existing tutorials on this topic

No response

Additional context

No response

cc @subramen @albanD

Lovkush-A, Aug 28 '24 14:08