Richard Zou
Np, let's land your PR first and then figure out what is going on with the tests
Thanks for the bug report and the repro script. I was able to reproduce this and extracted a smaller repro. Something is wrong on our side; we'll look into it...
I have root-caused this to https://github.com/pytorch/pytorch/issues/81111
@cyyever this has been fixed in PyTorch and will be in the next release. If you want to use it earlier, please try a PyTorch nightly (and build functorch from...
Closing because this has been resolved
A workaround is to use `jvp` -- jvp will re-evaluate the function you pass to it every time, so the performance won't be as good.
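For concreteness, here's a minimal sketch of the `jvp` workaround; the function `f` and the call site are illustrative, not from the original report:

```python
import torch
from functorch import jvp

def f(x):
    # stand-in for the function previously wrapped in an autograd.Function
    return x.clamp(min=0)

x = torch.randn(3)
v = torch.ones_like(x)  # tangent vector

# jvp re-runs f under forward-mode AD on every call, so nothing is
# cached between calls -- hence the performance caveat above.
out, tangent_out = jvp(f, (x,), (v,))
```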
What's going on in `grad(grad(foo))(x)`:

```
foo(GradTensor(GradTensor(x)))
MyReLU.apply(GradTensor(GradTensor(x)))
> y = GradTensor(GradTensor(x)).clamp(min=0)
> > GradTensor(x).clamp(min=0)
> > > x.clamp(min=0)
y.grad_fn = MyReLU
```

The inner GradTensor doesn't see the autograd.Function...
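For reference, a minimal sketch of the setup the trace above assumes; `foo` and `MyReLU` aren't defined in this thread, so these definitions are illustrative:

```python
import torch
from functorch import grad

class MyReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x > 0).to(grad_out.dtype)

def foo(x):
    return MyReLU.apply(x)

x = torch.tensor(1.0)
# Only the outermost grad level records MyReLU as y.grad_fn; the
# inner level only sees the plain clamp call, which is where things
# can go silently wrong.
grad(grad(foo))(x)
```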
Here's a (public) writeup of what the problem is: https://docs.google.com/document/d/1sPRJyP_vkZEY3RbNBcy2hLmXxnvZlV7cB6_-uxyVMAE/edit?usp=sharing
> Hi @zou3519, I was just wondering if there's been any update on adding custom autograd Function support to FuncTorch? Thank you! :)

We're still thinking about how it would...
@rejuvyesh the title was a bit misleading; it turns out autograd.Function can have silently incorrect behavior on all of our transforms.

> Are there any examples for how one can...
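To illustrate the kind of silent incorrectness meant here, a hedged sketch: the `ScaleGrad` Function below is hypothetical, and the functorch result reflects the behavior described earlier in this thread, where the transform differentiates through `forward` and never sees the custom `backward`:

```python
import torch
from functorch import grad

class ScaleGrad(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x * 2

    @staticmethod
    def backward(ctx, grad_out):
        # deliberately differs from the true derivative (which is 2)
        return grad_out * 10

x = torch.tensor(1.0, requires_grad=True)
# plain autograd honors the custom backward: gradient is 10
y = ScaleGrad.apply(x)
print(torch.autograd.grad(y, x))

# functorch's grad may silently ignore backward and differentiate
# through forward instead, giving 2 -- with no error raised
print(grad(ScaleGrad.apply)(torch.tensor(1.0)))
```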