Richard Zou
> Is `model.parameters()` guaranteed to return parameters in the same order as `make_functional`?

Yes.

> If this is the case then I can surely do this, however I would like...
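For reference, a minimal sketch of what that ordering guarantee lets you rely on (the `nn.Linear` toy model here is just for illustration):

```python
import torch
import torch.nn as nn
from functorch import make_functional

# Toy model for illustration
model = nn.Linear(3, 2)
fmodel, params = make_functional(model)

# The extracted params line up 1:1 with model.parameters()
for p_func, p_orig in zip(params, model.parameters()):
    assert p_func.shape == p_orig.shape
    assert torch.allclose(p_func, p_orig)

# The functional model takes the parameters explicitly
x = torch.randn(4, 3)
out = fmodel(params, x)
```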
@trenta3 out of curiosity, what are you using `make_functional` for? Are you using any of the other functorch APIs?
@kxhit thank you for your feedback. Could you give a little more context about why you want to update each original model's `state_dict`?
How does this work? Is there special logic in forward-mode AD that handles `support_as_strided`?
@albanD do you have a sense of how much overhead this adds? Making this return `false` for BatchedTensor doesn't actually work because BatchedTensor isn't directly involved in autograd -- autograd...
Thanks Alban. A few hundred nanoseconds is not that bad.
Had some more offline discussion with Alban. It's important to note that:

- people do use vmap over `torch.autograd.grad` (and likely will attempt to vmap over the dual tensor API)...
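For context, the pattern in question looks roughly like this per-sample-gradient sketch, written here with functorch's own `grad` rather than `torch.autograd.grad`:

```python
import torch
from functorch import vmap, grad

def loss_fn(weight, x):
    # Scalar loss for a single sample x
    return (x @ weight).sum()

weight = torch.randn(3, 2)
xs = torch.randn(8, 3)  # a batch of 8 samples

# vmap over the gradient computation: one gradient of the loss
# w.r.t. weight per sample, without a Python loop
per_sample_grads = vmap(grad(loss_fn), in_dims=(None, 0))(weight, xs)
print(per_sample_grads.shape)  # torch.Size([8, 3, 2])
```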
This is the same issue as #292, I think. I'm planning to put up a fix for that later today.
Okay, I put up a fix for the specific error you saw. There's another issue though, which is that the implementation of `slogdet` backward calls `tensor.item()` and we are unable...
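A hypothetical minimal repro of the kind of code that runs into this (the exact error depends on the functorch version; `f` here is just an illustrative function that routes through the `slogdet` backward):

```python
import torch
from functorch import vmap, grad

def f(A):
    # logabsdet is a scalar, so grad(f) is well defined per matrix
    sign, logabsdet = torch.linalg.slogdet(A)
    return logabsdet

As = torch.randn(8, 3, 3)

# Per-sample gradients of slogdet. The backward formula's call to
# tensor.item() is data-dependent, which vmap cannot see through,
# so on affected versions this raises instead of returning gradients.
per_sample_grads = vmap(grad(f))(As)
```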
`autograd.Function` is silently incorrect when used with functorch (https://github.com/pytorch/functorch/issues/207). So even if you manually define the backward, functorch just ignores it right now :( That issue is very tricky to...
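A sketch of what "silently incorrect" means in practice; the deliberately wrong backward below makes any bypass visible (the specific outputs reflect what issue #207 describes, not a guaranteed behavior across versions):

```python
import torch
from functorch import grad

class MyCube(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x ** 3

    @staticmethod
    def backward(ctx, grad_out):
        # Deliberately wrong gradient, to make any bypass visible
        return grad_out * 0

# Plain autograd respects the custom backward:
x = torch.tensor(2.0, requires_grad=True)
MyCube.apply(x).backward()
print(x.grad)  # tensor(0.)

# Under affected functorch versions, grad reportedly bypassed the
# custom backward and differentiated through forward instead, giving
# 3 * x**2 = 12 with no warning:
print(grad(MyCube.apply)(torch.tensor(2.0)))
```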