Error while generating integrated_gradients for custom object detection model
Attached are the full logs:
C:\Users\user\PycharmProjects\yolor_my\yolor_env\Scripts\python.exe C:/Users/user/PycharmProjects/yolor_my/yolor/my_test.py
To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
Person 0.65 [tensor(0.21673, device='cuda:0', requires_grad=True), tensor(19.62591, device='cuda:0', requires_grad=True), tensor(41.16095, device='cuda:0', requires_grad=True), tensor(47.82255, device='cuda:0', requires_grad=True)]
Traceback (most recent call last):
File "C:/Users/user/PycharmProjects/yolor_my/yolor/my_test.py", line 138, in <module>
test_img(r"C:\Users\user\Downloads\t1.png")
File "C:/Users/user/PycharmProjects/yolor_my/yolor/my_test.py", line 122, in test_img
n_steps=200) # integrated_gradients.attribute(torch_image, target=0, n_steps=200)#np.array(torch_image.detach().cpu())
File "C:\Users\user\PycharmProjects\yolor_my\yolor_env\lib\site-packages\captum\log\__init__.py", line 35, in wrapper
return func(*args, **kwargs)
File "C:\Users\user\PycharmProjects\yolor_my\yolor_env\lib\site-packages\captum\attr\_core\integrated_gradients.py", line 291, in attribute
method=method,
File "C:\Users\user\PycharmProjects\yolor_my\yolor_env\lib\site-packages\captum\attr\_core\integrated_gradients.py", line 354, in _attribute
additional_forward_args=input_additional_args,
File "C:\Users\user\PycharmProjects\yolor_my\yolor_env\lib\site-packages\captum\_utils\gradient.py", line 121, in compute_gradients
grads = torch.autograd.grad(torch.unbind(outputs), inputs)
File "C:\Users\user\PycharmProjects\yolor_my\yolor_env\lib\site-packages\torch\autograd\__init__.py", line 204, in grad
inputs, allow_unused)
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 1]], which is output 0 of SliceBackward, is at version 2; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
Process finished with exit code 1
I was trying to generate integrated gradients for a YOLOR model and ran into this error. Any leads would help a lot.
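For context, this class of RuntimeError is not specific to Captum: it fires whenever a tensor that autograd saved for the backward pass is mutated in place. A minimal toy repro (not the YOLOR code) that triggers the same message:

```python
import torch

# exp() saves its output for the backward pass; an in-place write to that
# output bumps its version counter, and backward then refuses to run.
x = torch.ones(4, 1, requires_grad=True)
y = x.exp()
y[0] = 0.0  # in-place modification of a tensor autograd still needs

try:
    y.sum().backward()
except RuntimeError as err:
    # "one of the variables needed for gradient computation has been
    # modified by an inplace operation" -- same failure mode as the log above
    print(type(err).__name__)
```

In the YOLOR case, the `[4, 1]` tensor in the message points at a box-coordinate slice being written in place somewhere in the forward pass.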
@akashlp27, it looks like there was an issue while computing the gradients. The error suggests enabling torch.autograd.set_detect_anomaly(True)
for further investigation. This might not be a Captum-specific issue; there may be a problem computing the gradients w.r.t. the inputs. Perhaps you can first try plain autograd or captum.attr.Saliency and see whether you are able to compute the gradients at all.
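Both suggestions can be sketched together. The model below is a toy stand-in, not YOLOR; the `torch.autograd.grad` call is essentially what captum.attr.Saliency computes internally, so if this pattern fails on your model, the problem is in the model's forward pass rather than in Captum:

```python
import torch
import torch.nn as nn

# Report which forward op produced the tensor whose backward failed.
torch.autograd.set_detect_anomaly(True)

# Toy stand-in for the detector (assumption: any model reduced to a scalar).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 8 * 8, 1),
)

image = torch.rand(1, 3, 8, 8, requires_grad=True)
score = model(image).sum()                    # one scalar, like a detection score
grads = torch.autograd.grad(score, image)[0]  # what Saliency does under the hood
print(grads.shape)                            # torch.Size([1, 3, 8, 8])
```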
Hi, yes, I was able to use autograd in PyTorch to generate the gradients, but I'm having issues generating the integrated gradients. I also got the same error with Saliency.
@akashlp27, Saliency uses autograd to compute the gradients, so it should behave the same as using autograd directly. If you can provide a Colab notebook, we can debug it.
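Since Saliency fails the same way, the in-place write is almost certainly inside the model's own forward pass; detection heads commonly assign box coordinates with slice writes like `x[..., :4] = ...`. A toy sketch of the usual fix, which builds the result out of place instead of mutating the saved tensor:

```python
import torch

x = torch.ones(4, 1, requires_grad=True)
y = x.exp()

# Broken pattern:  y[0] = 0.0   (mutates a tensor autograd saved for backward)
# Out-of-place fix: construct a new tensor instead of writing into y.
mask = torch.zeros_like(y)
mask[0] = 1.0              # mask has no grad history, so this write is safe
y_fixed = y * (1 - mask)   # zeros row 0 without touching y

y_fixed.sum().backward()   # succeeds: the autograd graph is intact
print(x.grad.shape)        # torch.Size([4, 1])
```

The same idea applies to detection code: replace each slice assignment on a graph tensor with masking, `torch.where`, or `torch.cat` of freshly computed pieces.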