pytorch autograd deprecated
Hi!
I am trying to run the Jupyter notebook part3_learned_reconstruction_pytorch.ipynb from the odlworkshop repository. I use PyTorch 1.7.0 and CUDA 10.1.
I get the following error message:
RuntimeError                              Traceback (most recent call last)
~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

~/.local/lib/python3.6/site-packages/odl/contrib/torch/operator.py in forward(self, x)
    393         results = []
    394         for i in range(x_flat_xtra.data.shape[0]):
--> 395             results.append(self.op_func(x_flat_xtra[i]))
    396
    397         # Reshape the resulting stack to the expected output shape

~/.local/lib/python3.6/site-packages/torch/autograd/function.py in __call__(self, *args, **kwargs)
    158     def __call__(self, *args, **kwargs):
    159         raise RuntimeError(
--> 160             "Legacy autograd function with non-static forward method is deprecated. "
    161             "Please use new-style autograd function with static forward method. "
    162             "(Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)")
RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)
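For context, the "new-style" autograd functions the error message refers to define forward and backward as static methods and are invoked via .apply(). A minimal sketch, purely to illustrate the API (not ODL's actual implementation):

import torch

# Illustrative new-style autograd Function: forward/backward are static methods.
class ScaleByTwo(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # ctx can store tensors needed for the backward pass
        ctx.save_for_backward(x)
        return 2 * x

    @staticmethod
    def backward(ctx, grad_output):
        # The gradient of 2*x with respect to x is 2
        return 2 * grad_output

y = ScaleByTwo.apply(torch.ones(3, requires_grad=True))
y.sum().backward()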
I wonder if it would be possible to update ODL to work with the new-style autograd functions in PyTorch?
Kind Regards, Louise
I am having the same problem!
For now, just downgrade your PyTorch version to below 1.3. I am also working on this problem, which occurs above version 1.3, and am still investigating...
Hi,
I am using almost the latest PyTorch (1.12.1.post201) and I have no such problem with the binding. I imagine that part3_learned_reconstruction_pytorch.ipynb may contain some outdated code; however, the following code runs as expected:
import matplotlib.pyplot as plt
import numpy as np
import odl
import torch
from odl.contrib.torch import OperatorModule

print(torch.__version__)

X = odl.uniform_discr([-10, -10], [10, 10], (100, 100))
x = odl.phantom.shepp_logan(X)

apart = odl.uniform_partition(0, 2 * np.pi, 100)
dpart = odl.uniform_partition(-30, 30, 100)
geometry = odl.tomo.FanBeamGeometry(apart=apart, dpart=dpart, src_radius=15, det_radius=15)
operator = odl.tomo.RayTransform(X, geometry)
pt_op = OperatorModule(operator)
pt_x = torch.from_numpy(x.asarray().reshape(1, 1, *x.shape)).cuda()

plt.imshow(pt_op(pt_x).detach().cpu().numpy().squeeze())
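To check that autograd also works through the binding, here is a quick gradient test (assuming the snippet above ran; RayTransform is linear, so the backward pass uses its adjoint):

# Sketch of a gradient check through OperatorModule, reusing pt_op and pt_x from above.
pt_x = pt_x.clone().requires_grad_(True)
loss = pt_op(pt_x).sum()
loss.backward()
print(pt_x.grad.shape)  # expected: torch.Size([1, 1, 100, 100])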
I still get the error when I use your example (pytorch==1.10.0 and 1.8.0).