RecurrentAttentionConvolutionalNeuralNetwork

RuntimeError: grad can be implicitly created only for scalar outputs

heyongcs opened this issue on May 22 '18 · 2 comments

When I use batch_size=32 instead of 1, I get this error:

File "/mnt/workspace/py/RA_CNN/src/manager.py", line 163, in train self.do_epoch(epoch_idx, optimizer, optimize_class=optimize_class) File "/mnt/workspace/py/RA_CNN/src/manager.py", line 113, in do_epoch self.do_batch(optimizer, batch, label, optimize_class=optimize_class) File "/mnt/workspace/py/RA_CNN/src/manager.py", line 101, in do_batch self.criterion_rank(scores[i-1], scores[i], label).backward(retain_graph=retain_graph) File "/root/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 167, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables) File "/root/anaconda3/lib/python3.6/site-packages/torch/autograd/init.py", line 87, in backward grad_variables, create_graph = _make_grads(variables, grad_variables, create_graph) File "/root/anaconda3/lib/python3.6/site-packages/torch/autograd/init.py", line 35, in _make_grads raise RuntimeError("grad can be implicitly created only for scalar outputs") RuntimeError: grad can be implicitly created only for scalar outputs

heyongcs · May 22 '18 02:05

I modified the RankLoss function in manager.py:

    # Probability assigned to the ground-truth class at each scale
    ps1 = F.softmax(scores1, dim=1).gather(1, target.long().view(-1, 1))
    ps2 = F.softmax(scores2, dim=1).gather(1, target.long().view(-1, 1))
    # Per-sample hinge loss, mean-reduced to a scalar so backward() works
    return torch.mean(torch.clamp(ps1 - ps2 + self.margin, min=0))

Is this right?

heyongcs · May 22 '18 06:05
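The modification looks reasonable: it is the pairwise ranking (hinge) loss between two scales from the RA-CNN paper, and the torch.mean reduces the per-sample losses to a scalar so .backward() can be called without an explicit gradient argument. Wrapped as a module, the same fix might look like this (a sketch only; the name PairwiseRankLoss, the margin default, and the 200-class shapes are illustrative, not the repo's actual API):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PairwiseRankLoss(nn.Module):
        # Sketch of the fix above; class name and margin default are illustrative.
        def __init__(self, margin=0.05):
            super(PairwiseRankLoss, self).__init__()
            self.margin = margin

        def forward(self, scores1, scores2, target):
            # Probability assigned to the ground-truth class at each scale
            ps1 = F.softmax(scores1, dim=1).gather(1, target.long().view(-1, 1))
            ps2 = F.softmax(scores2, dim=1).gather(1, target.long().view(-1, 1))
            # Per-sample hinge, mean-reduced to a 0-dim tensor for backward()
            return torch.mean(torch.clamp(ps1 - ps2 + self.margin, min=0))

    # Usage: scalar loss for any batch size
    scores1 = torch.randn(32, 200)                  # e.g. 200 classes, batch of 32
    scores2 = torch.randn(32, 200, requires_grad=True)
    target = torch.randint(0, 200, (32,))
    criterion_rank = PairwiseRankLoss(margin=0.05)
    loss = criterion_rank(scores1, scores2, target)  # 0-dim tensor
    loss.backward()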

Interesting. Did this work for you?

dillondavis · Jul 18 '18 02:07