albanD
This might trigger a couple of master-only build failures, for sure. But we can adapt if they are too frequent.
Thanks for opening this! Some comments:
- TORCH_CHECK and TORCH_WARN already get translated to their Python equivalents on the way there.
- We can create new flavors of TORCH_WARN similar...
Some experiments have been done to visualize PyTorch graphs in TensorFlow using ONNX. The problem is that the graphs usually end up being too big because they print at the...
Moving to vision as the datasets in question are from there.
The possible caveat with that is that some operations require contiguous tensors to work (BLAS, cuDNN), so extra `.clone()` calls might be needed in some places. Also, for some...
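To make the caveat concrete, here is a small illustration (mine, not from the post): a transposed view is non-contiguous, which is the kind of tensor that BLAS- or cuDNN-backed ops may not accept without first materializing a copy.

```python
import torch

a = torch.randn(3, 4)
b = a.t()                  # a view over the same storage with swapped strides
print(b.is_contiguous())   # False: contiguity-requiring backends need a real copy here
c = b.contiguous()         # materializes a contiguous copy of the data
print(c.is_contiguous())   # True
```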
The `full_backward_hook` does rely on custom Functions. So if custom Functions are not supported, then yes, that won't work I'm afraid.
By the way, while the `nn.Module`'s `full_backward_hook` will not work, the basic autograd hooks at the `Tensor` level should work. So in particular for your use case, if you just...
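As a rough sketch of what that could look like (my own example, not from the original reply), a `Tensor`-level hook registered with `register_hook` receives the gradient flowing into that tensor, which for a layer's output plays the role of `grad_output`:

```python
import torch
import torch.nn as nn

linear = nn.Linear(4, 2)
captured = {}

def save_grad(grad):
    # grad is the gradient w.r.t. the hooked Tensor (here, the layer's output)
    captured["grad_output"] = grad.detach()

x = torch.randn(3, 4)
out = linear(x)
out.register_hook(save_grad)   # Tensor-level autograd hook
out.sum().backward()

print(captured["grad_output"].shape)  # torch.Size([3, 2])
```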
> I can register a hook on a Tensor and use my forward_pre_hook and backward_full_hook formulae and that'd return (for all samples) the input, and the backward sensitivity (grad_output) for...
> the gradient of a Tensor rather than the grad_output values that I need. What do you mean by that? The `grad_output` is the gradient of the output. So if you...
> I assume I'd have to create a Tensor out for all nn.Module objects in my network and then manually register a hook on those out variables? Then that would...
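One way to avoid writing those `out` variables by hand (a sketch under my own assumptions, not the reply's exact suggestion) is to use a forward hook on each leaf module to grab its output `Tensor` and attach the `Tensor`-level hook there:

```python
import torch
import torch.nn as nn

grads = {}

def make_forward_hook(name):
    def forward_hook(module, inputs, output):
        def save_grad(g):
            grads[name] = g.detach()  # store only; returning None leaves the grad untouched
        # output is the Tensor produced by this module; hook its gradient
        output.register_hook(save_grad)
    return forward_hook

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
for name, module in model.named_modules():
    if len(list(module.children())) == 0:       # leaf modules only
        module.register_forward_hook(make_forward_hook(name))

out = model(torch.randn(5, 4))
out.sum().backward()
print({name: g.shape for name, g in grads.items()})
```

This keeps the module code untouched while still collecting, per module, the gradient flowing into its output.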