
Why is setting the gradients manually needed?


        # Setting the gradients manually on the inputs and outputs (mimic backwards)
        for element, element_grad in zip(inputs, gradients[:ctx.num_inputs]):
            element.grad = element_grad

        for element, element_grad in zip(outputs, grad_outputs):
            element.grad = element_grad

I wonder why this snippet is needed. It seems that once the backward function returns the gradients, PyTorch will take care of it. I got a warning:

UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.

However, even after I removed this code snippet, it seems to work fine. So I wonder, is it necessary?
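
For reference, here is a minimal sketch (plain PyTorch, not memcnn code) of what I understand to be the standard custom autograd.Function pattern: backward just returns the gradients, autograd fills in .grad on the leaf inputs by itself, and a non-leaf tensor only gets a .grad if retain_grad() is called.

    import torch

    class Double(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            return 2 * x

        @staticmethod
        def backward(ctx, grad_output):
            # Just return the gradient w.r.t. the input;
            # no manual .grad assignment is done here.
            return 2 * grad_output

    leaf = torch.randn(3, requires_grad=True)   # leaf tensor
    hidden = leaf * 1.0                         # non-leaf (intermediate) tensor
    out = Double.apply(hidden)
    out.sum().backward()

    print(leaf.grad)    # filled in by autograd automatically
    print(hidden.grad)  # None; accessing it also emits the UserWarning quoted above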

lighShanghaitech commented on Jan 18, 2021

@lighShanghaitech There is a use-case in my tests that required me to do so, and I am pretty sure some tests fail if I remove those lines, but I'll have to look it up another day. I agree that it looks redundant for most use-cases, though; thanks for pointing it out.
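
A hypothetical illustration of such a use-case (plain PyTorch, not the actual memcnn test): a test that asserts on the .grad of the tensors passed through the wrapper only sees populated gradients if they are set by hand, because those tensors are typically non-leaf and their .grad is not retained by default.

    import torch

    x = torch.randn(4, requires_grad=True)
    y = x * 3.0                  # intermediate (non-leaf) tensor
    loss = (y ** 2).sum()
    loss.backward()

    print(x.grad is None)        # False: leaf gradients are populated by autograd
    print(y.grad is None)        # True: non-leaf gradients are not retained by default

    # A test asserting on y.grad therefore needs it to be set, either via
    # y.retain_grad() before backward, or by a manual assignment like the
    # one in the snippet above (mimicking backwards):
    y.grad = 2.0 * y.detach()    # d loss / d y = 2 * y, set by hand
    print(torch.allclose(y.grad, 6.0 * x.detach()))  # True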

silvandeleemput commented on Jan 27, 2021