pytorch-grad-cam
How to extract a CAM during training without interfering with backpropagation
Hi,
I'm implementing the following custom function:
```python
from pytorch_grad_cam import (
    GradCAM, ScoreCAM, FullGrad, GradCAMPlusPlus, XGradCAM, EigenCAM,
    EigenGradCAM, LayerCAM, HiResCAM, GradCAMElementWise,
)
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

def return_cam_from_model(model, target_layer, batch, targets, cam_name="gradcam"):
    model.eval()  # set back to .train() if inside the training loop
    cam = None
    if cam_name == "gradcam":
        cam = GradCAM(model=model, target_layers=target_layer, use_cuda=False)
    elif cam_name == "scorecam":
        cam = ScoreCAM(model=model, target_layers=target_layer, use_cuda=True)
    elif cam_name == "fullcam":
        cam = FullGrad(model=model, target_layers=target_layer, use_cuda=True)
    elif cam_name == "gradcamplspls":
        cam = GradCAMPlusPlus(model=model, target_layers=target_layer, use_cuda=True)
    elif cam_name == "xgradcam":
        cam = XGradCAM(model=model, target_layers=target_layer, use_cuda=True)
    elif cam_name == "eigencam":
        cam = EigenCAM(model=model, target_layers=target_layer, use_cuda=True)
    elif cam_name == "eigengradcam":
        cam = EigenGradCAM(model=model, target_layers=target_layer, use_cuda=True)
    elif cam_name == "layercam":
        cam = LayerCAM(model=model, target_layers=target_layer, use_cuda=True)
    elif cam_name == "fullgrad":
        cam = FullGrad(model=model, target_layers=target_layer, use_cuda=True)
    elif cam_name == "hirescam":
        cam = HiResCAM(model=model, target_layers=target_layer, use_cuda=True)
    elif cam_name == "gradcamelementwise":
        cam = GradCAMElementWise(model=model, target_layers=target_layer, use_cuda=True)
    elif cam_name == "NOXAI":
        return 0
    else:
        raise Exception("Cam name", cam_name, "not recognized")

    targets = [ClassifierOutputTarget(i) for i in targets]
    ret_cam = cam(input_tensor=batch, targets=targets)
    model.train()
    return ret_cam
```
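As a side note, the long if/elif chain can be collapsed into a name-to-class table. A sketch, using stand-in classes so it is self-contained (in practice, substitute the real classes imported from `pytorch_grad_cam`):

```python
# Stand-in classes so the sketch runs on its own; replace these with the
# real CAM classes from pytorch_grad_cam in actual use.
class GradCAM:
    def __init__(self, model=None, target_layers=None, use_cuda=False):
        self.model = model
        self.target_layers = target_layers
        self.use_cuda = use_cuda

class ScoreCAM(GradCAM): pass
class HiResCAM(GradCAM): pass

# One table replaces the whole if/elif chain.
CAM_BY_NAME = {
    "gradcam": GradCAM,
    "scorecam": ScoreCAM,
    "hirescam": HiResCAM,
    # ... remaining CAM variants go here ...
}

def make_cam(cam_name, model, target_layers):
    try:
        cam_cls = CAM_BY_NAME[cam_name]
    except KeyError:
        raise ValueError(f"Cam name {cam_name!r} not recognized")
    return cam_cls(model=model, target_layers=target_layers, use_cuda=True)

cam = make_cam("scorecam", model=None, target_layers=[])
```

Adding a new CAM variant then only requires one new dictionary entry, and unknown names still raise a clear error.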
We have found that calling model.train() before running the CAM produces a wrong explanation, so we swapped the order (eval first, then CAM, then back to train).
Since the CAM execution is affected by the model's training mode, do you know whether anything inside the CAM functions could interfere with the model's training/backward pass, and how to deal with it?
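A plausible mechanism (a sketch in plain PyTorch, not the library's verified internals): gradient-based CAM methods run their own backward pass and register hooks on the target layer, so after the CAM call the parameters can carry stale `.grad` buffers that would accumulate into the next training update unless cleared. The model and hook below are stand-ins for illustration:

```python
import torch
import torch.nn as nn

# Tiny stand-in model; purely illustrative, not the library's API.
model = nn.Linear(4, 2)
x = torch.randn(3, 4)

# --- CAM-style gradient pass (roughly what a gradient-based CAM does) ---
handle = model.register_forward_hook(lambda m, inp, out: None)  # placeholder hook
model.eval()
model(x).sum().backward()            # leaves non-zero .grad on the parameters
stale = model.weight.grad.clone()    # this gradient belongs to the CAM, not training

# --- Clean-up before resuming training ---
handle.remove()                      # detach the hook so it doesn't fire later
model.zero_grad(set_to_none=True)    # drop the stale gradients
model.train()

assert model.weight.grad is None     # training's next backward() starts clean
```

So one defensive pattern is to call `model.zero_grad()` (and make sure any hooks the CAM object registered are released) right after extracting the CAM and before the training step's `loss.backward()`.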
I'm trying to get tensors with gradient after the library execution
Hi, what exactly do you mean by that?