
Swin Transformer cannot be used with CAM --> grad can be implicitly created only for scalar outputs

Open woodszp opened this issue 1 year ago • 2 comments

File "/home/xxx/Project/latested/camvisual.py", line 227, in grayscale_cam = cam(input_tensor=img_tensor, targets=target_category) File "/home/xxx/miniconda3/lib/python3.10/site-packages/pytorch_grad_cam/base_cam.py", line 188, in call return self.forward(input_tensor, File "/home/xxx/miniconda3/lib/python3.10/site-packages/pytorch_grad_cam/base_cam.py", line 84, in forward loss.backward(retain_graph=True) File "/home/xxx/miniconda3/lib/python3.10/site-packages/torch/tensor.py", line 488, in backward torch.autograd.backward( File "/home/xxx/miniconda3/lib/python3.10/site-packages/torch/autograd/init.py", line 190, in backward grad_tensors = make_grads(tensors, grad_tensors, is_grads_batched=False) File "/home/xxx/miniconda3/lib/python3.10/site-packages/torch/autograd/init.py", line 85, in _make_grads raise RuntimeError("grad can be implicitly created only for scalar outputs") RuntimeError: grad can be implicitly created only for scalar outputs

woodszp avatar Aug 06 '23 13:08 woodszp

Issue Resolved

If you are using the Swin Transformer, you may need to modify the forward method in base_cam.py: add the lines below right after the call to self.activations_and_grads.

def forward(self, input_tensor, targets, eigen_smooth=False):
    ...
    outputs = self.activations_and_grads(input_tensor)
    # Added for the Swin Transformer: the raw outputs are unpooled (B, L, C)
    # token features, so apply the final norm, average pooling and the
    # classification head to get (B, num_classes) logits before the CAM loss.
    out_norm = self.model.norm(outputs)
    out_pool = torch.flatten(self.model.avgpool(out_norm.transpose(1, 2)), 1)
    outputs = self.model.head(out_pool)
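If you would rather not patch base_cam.py, roughly the same fix can be applied by wrapping the model so that its forward already returns class logits. This is only a sketch, not part of the library: it assumes the Swin model's forward returns the unpooled (B, L, C) token features and exposes norm, avgpool and head attributes exactly as in the snippet above, and the SwinLogitsWrapper name is made up for illustration.

import torch
import torch.nn as nn

class SwinLogitsWrapper(nn.Module):
    # Hypothetical wrapper: turns unpooled Swin token features into class
    # logits, so the CAM loss is a scalar per target and backward() succeeds.
    def __init__(self, swin_model):
        super().__init__()
        self.model = swin_model

    def forward(self, x):
        feats = self.model(x)                      # assumed shape (B, L, C)
        feats = self.model.norm(feats)             # final layer norm
        pooled = torch.flatten(self.model.avgpool(feats.transpose(1, 2)), 1)  # (B, C)
        return self.model.head(pooled)             # (B, num_classes)

The wrapped model would then be passed to the CAM constructor in place of the raw Swin model, e.g. GradCAM(model=SwinLogitsWrapper(model), target_layers=target_layers), while target_layers still point at the layers of the underlying Swin model.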

woodszp avatar Aug 08 '23 06:08 woodszp

Thank you for sharing the code. It is important not to overlook the output that has not been flattened yet.

Mahiro2211 avatar Dec 03 '23 08:12 Mahiro2211