CUDA assumption in the ts_compile code
Hey folks, I stumbled into a CUDA assumption (on my non-CUDA machine).
Here's the fix that works for me, but it's obviously not very general:
```diff
diff --git a/functorch/_src/compilers.py b/functorch/_src/compilers.py
index 10fe42a..83002ac 100644
--- a/functorch/_src/compilers.py
+++ b/functorch/_src/compilers.py
@@ -60,7 +60,7 @@ def ts_compile(fx_g: fx.GraphModule, _) -> Callable:
     for i in range(1000):
         attr = f"_tensor_constant{i}"
         if hasattr(fx_g, attr):
-            setattr(fx_g, attr, getattr(fx_g, attr).cuda())
+            setattr(fx_g, attr, getattr(fx_g, attr))
         else:
             break
```
@bwasti where did you encounter that? The code on the main branch is a bit different: https://github.com/pytorch/functorch/blob/76178743084277cf6d7cac752279f905cdd60e13/functorch/_src/compilers.py#L24-L62
Yeah, it's been removed on main, I believe - this was a hack we had in place to work around nvfuser limitations, but that's been fixed now.