
functorch provides JAX-like composable function transforms for PyTorch.

162 functorch issues

When running pytest test/functorch/test_vmap.py, the error `ModuleNotFoundError: No module named 'torch.testing._internal.autograd_function_db'` comes up. This issue only occurs with test_vmap.py; all the other unit tests in functorch work just...

In the `functorch` test suite, we use `sample_inputs` to get samples from an OpInfo. The problem is that `sample_inputs` may or may not cover all the cases/overloads for an operator. I...
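For context, a minimal sketch of pulling samples from an OpInfo (the choice of `add` and the iteration pattern are illustrative assumptions, not part of this report):

```python
import torch
from torch.testing._internal.common_methods_invocations import op_db

# Look up one operator's OpInfo and run its callable on every sample.
# These samples may not exercise every overload of the underlying op,
# which is the coverage gap described above.
opinfo = next(op for op in op_db if op.name == "add")
for sample in opinfo.sample_inputs(device="cpu", dtype=torch.float32):
    out = opinfo.op(sample.input, *sample.args, **sample.kwargs)
```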

I have a use case for `functorch`: I would like to evaluate many candidate settings of model parameters very efficiently (I want to eliminate the loop). Here's an example...
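As a hedged sketch of the loop-free pattern being asked for (the function, shapes, and names below are assumptions, not the reporter's actual code), vmap can evaluate one function under many parameter settings at once:

```python
import torch
from functorch import vmap

def loss_fn(theta, x):
    # Loss for a single parameter vector theta.
    return ((x @ theta) ** 2).sum()

x = torch.randn(8, 4)
thetas = torch.randn(10, 4)  # 10 candidate parameter settings

# Map over the stacked parameters while sharing x; replaces a Python loop.
losses = vmap(loss_fn, in_dims=(0, None))(thetas, x)  # shape: (10,)
```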

See https://github.com/pytorch/pytorch/pull/90317 for more context. Right now, entering and exiting a level is done via functions, but there is added complexity that would benefit from more structure.
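One reading of "more structure" is wrapping the paired enter/exit calls in a context manager so the exit cannot be skipped on error paths. This is a sketch only; `_enter_level` and `_exit_level` are hypothetical stand-ins, not functorch APIs:

```python
import contextlib

# Hypothetical stand-ins for the real enter/exit functions.
def _enter_level():
    return 1

def _exit_level(level):
    pass

@contextlib.contextmanager
def interpreter_level():
    level = _enter_level()
    try:
        yield level
    finally:
        # The exit runs even if the body raises, a guarantee that bare
        # paired function calls do not provide.
        _exit_level(level)

with interpreter_level() as level:
    ...  # code running inside the level
```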

vmap should accept a dim_size=None argument that lets the user specify the size of the dimension being vmapped over. It should behave similarly to JAX's axis_size argument. The net...

Labels: actionable, small
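For reference, a small sketch of the JAX behavior being cited (the proposed functorch `dim_size` argument does not exist yet, so this shows only the analogue, `jax.vmap`'s `axis_size`):

```python
import jax
import jax.numpy as jnp

# With in_axes=None no input carries the mapped dimension, so
# axis_size must supply its size explicitly.
f = jax.vmap(lambda x: x + 1.0, in_axes=None, axis_size=3)
print(f(jnp.zeros(())))  # [1. 1. 1.]
```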

```python
import torch
import functorch

dtype = torch.float32
device = torch.device('cpu')

def foo(x):
    return x + 1.0

x = torch.tensor([[0.0]], dtype=dtype, device=device)

functorch.make_fx(functorch.vmap(foo))(x)    # Works
functorch.make_fx(functorch.jacrev(foo))(x)  # Works
functorch.make_fx(functorch.jacfwd(foo))(x)  #...
```

## Motivation

We have a limitation: if someone has a custom operator, the kernels it registers for functorch transforms are not allowed to call a functorch transform. This...

Hello @zou3519, @samdow. TLDR: I got the following error: `UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::_sparse_mm. Please file us...
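A hedged repro sketch of how this warning can surface (the shapes and the exact call are assumptions); only the dense argument is vmapped, and the missing `aten::_sparse_mm` batching rule forces the slow fallback loop:

```python
import torch
from functorch import vmap

s = torch.eye(3).to_sparse()  # fixed sparse matrix
d = torch.randn(4, 3, 3)      # batch of dense matrices

# No batching rule exists for aten::_sparse_mm, so vmap falls back to a
# per-slice loop and emits the UserWarning quoted above.
out = vmap(lambda m: torch.sparse.mm(s, m))(d)
```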

TLDR: Is there a way to optimize a model created by combine_state_for_ensemble using torch.backward()? Hi, I am using combine_state_for_ensemble for HyperNet training.

```python
fmodel, fparams, fbuffers = combine_state_for_ensemble([HyperMLP() for i in...
```
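A hedged sketch of the ensembling-plus-backward pattern in question (a tiny `nn.Linear` module stands in for `HyperMLP`, and the shapes are assumptions):

```python
import torch
import torch.nn as nn
from functorch import combine_state_for_ensemble, vmap

class TinyMLP(nn.Module):  # stand-in for HyperMLP
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 1)

    def forward(self, x):
        return self.fc(x)

models = [TinyMLP() for _ in range(3)]
fmodel, params, buffers = combine_state_for_ensemble(models)
for p in params:
    p.requires_grad_()  # stacked leaves that plain autograd can track

x = torch.randn(8, 4)
# vmap over the stacked parameters; the same minibatch goes to every model.
out = vmap(fmodel, in_dims=(0, 0, None))(params, buffers, x)
loss = out.sum()
loss.backward()  # gradients land in params[i].grad, usable with torch.optim
```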

Hi, I'm using [Nvidia's PyTorch NGC Docker image 22.02](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel_22-02.html#rel_22-02), which contains Torch 1.11.0a0+17540c5c. I cannot install any version of functorch while keeping the original version of Torch at the same...