pytorch_fft
Autograd error with multiple FFTs
Hi, I'm trying to do some linear operations with two FFTs:
import torch
from torch.autograd import Variable
from torch.autograd.gradcheck import gradcheck
import pytorch_fft.fft.autograd as fft  # assumed import path for the autograd FFT wrappers

class fft_autotest(torch.nn.Module):
    def __init__(self):
        super(fft_autotest, self).__init__()

    def forward(self, x1, x2):
        f = fft.Fft()
        x1_fre, x1_fim = f(x1, torch.zeros_like(x1))
        x2_fre, x2_fim = f(x2, torch.zeros_like(x2))
        return x1_fre + x2_fre

x1 = Variable(torch.rand(3, 2).cuda(), requires_grad=True)
x2 = Variable(torch.rand(3, 2).cuda(), requires_grad=True)
func = fft_autotest()
test = gradcheck(func, (x1, x2), eps=1e-2)
print(test)
which produces the following error:
RuntimeError: for output no. 0,
numerical:(
1.0000 1.0000 0.0000 0.0000 0.0000 0.0000
1.0000 -1.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 1.0000 1.0000 0.0000 0.0000
0.0000 0.0000 1.0000 -1.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 1.0000 1.0000
0.0000 0.0000 0.0000 0.0000 1.0000 -1.0000
[torch.FloatTensor of size 6x6]
,
1.0000 1.0000 0.0000 0.0000 0.0000 0.0000
1.0000 -1.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 1.0000 1.0000 0.0000 0.0000
0.0000 0.0000 1.0000 -1.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 1.0000 1.0000
0.0000 0.0000 0.0000 0.0000 1.0000 -1.0000
[torch.FloatTensor of size 6x6]
)
analytical:(
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
[torch.FloatTensor of size 6x6]
,
2 2 0 0 0 0
2 -2 0 0 0 0
0 0 2 2 0 0
0 0 2 -2 0 0
0 0 0 0 2 2
0 0 0 0 2 -2
[torch.FloatTensor of size 6x6]
)
The interesting observation is that the second analytical Jacobian equals the sum of the two numerical Jacobians. I tried different output functions and this always holds. Any ideas why this coincidence happens?
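For concreteness, the observation can be checked directly against the matrices printed above (a small sketch that only restates those printed numbers; the [[1, 1], [1, -1]] block is the expected real-part Jacobian of a length-2 FFT):

import torch

# Both numerical Jacobians printed above are the same block-diagonal matrix,
# one [[1, 1], [1, -1]] block per row of the (3, 2) input.
block = torch.Tensor([[1., 1.], [1., -1.]])
numerical = torch.zeros(6, 6)
for i in range(3):
    numerical[2 * i:2 * i + 2, 2 * i:2 * i + 2] = block

# The second analytical Jacobian printed above is exactly twice that matrix.
analytical_x2 = 2 * numerical

# The observation: analytical (w.r.t. x2) == numerical (x1) + numerical (x2)
print(torch.equal(analytical_x2, numerical + numerical))  # True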
Thanks!
This seems related to #13 but I haven't been able to figure this out. It might have to do with certain assumptions being violated on the underlying memory, but as of right now I haven't found an answer.
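One way to probe whether shared state between the two calls is the culprit (a rough, untested sketch that only rearranges the snippet above so each input gets its own Fft instance; the import path is again an assumption) would be to rerun gradcheck and see whether the analytical Jacobian w.r.t. x1 still comes out as all zeros:

import torch
from torch.autograd import Variable
from torch.autograd.gradcheck import gradcheck
import pytorch_fft.fft.autograd as fft  # assumed import path, as above

class fft_autotest_separate(torch.nn.Module):
    def __init__(self):
        super(fft_autotest_separate, self).__init__()

    # Same computation as before, but each input gets its own Fft instance.
    def forward(self, x1, x2):
        f1, f2 = fft.Fft(), fft.Fft()
        x1_fre, _ = f1(x1, torch.zeros_like(x1))
        x2_fre, _ = f2(x2, torch.zeros_like(x2))
        return x1_fre + x2_fre

x1 = Variable(torch.rand(3, 2).cuda(), requires_grad=True)
x2 = Variable(torch.rand(3, 2).cuda(), requires_grad=True)
print(gradcheck(fft_autotest_separate(), (x1, x2), eps=1e-2))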
@riceric22 Thanks for the reply!
One more hint I found. After calling backward() with

torch.autograd.backward(func.forward(x1, x2), [torch.ones(x1.size()).cuda()])

x1.grad is None, which means the gradient didn't propagate to it. This is weird to me, since autograd should compute the gradient w.r.t. all leaf nodes (I did check x1.is_leaf and it is True).
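For what it's worth, a quick way to see how far the gradient actually flows is to register hooks on the inputs (a sketch using standard autograd hooks; the fft calls just mirror the snippet above, and the import path is again an assumption):

import torch
from torch.autograd import Variable
import pytorch_fft.fft.autograd as fft  # assumed import path, as above

x1 = Variable(torch.rand(3, 2).cuda(), requires_grad=True)
x2 = Variable(torch.rand(3, 2).cuda(), requires_grad=True)

# Hooks fire only if a gradient actually reaches the variable.
x1.register_hook(lambda g: print('grad reached x1:', g))
x2.register_hook(lambda g: print('grad reached x2:', g))

f = fft.Fft()
x1_fre, _ = f(x1, torch.zeros_like(x1))
x2_fre, _ = f(x2, torch.zeros_like(x2))
out = x1_fre + x2_fre

out.backward(torch.ones(out.size()).cuda())
# If only the x2 hook fires, the second Fft call's backward is receiving
# the whole gradient, which would also explain why x1.grad stays None.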