pytorch-grad-cam
Memory leakage during multiple loss calculations
Thank you for publishing this package; it is very useful to the community. However, when I perform multiple loss calculations, memory keeps growing and is never released, so memory usage increases continuously during training. When I create a new CAM object inside a loop, memory accumulates even if I delete the object at the end of each iteration. Have you looked into why this happens, and how can I avoid it?
Hi, we support a `with` clause that should solve this. Can you please try it? For example:

```python
with GradCAM(model=model,
             target_layers=target_layers) as cam:
    ...
```
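The idea behind the `with` clause is that a CAM object registers hooks on the model, and a context manager guarantees they are removed when the block exits, even on exceptions. The following is a minimal illustrative sketch of that pattern using only the standard library; `FakeCam`, its `hooks` list, and `release()` are hypothetical stand-ins, not the actual pytorch-grad-cam internals.

```python
class FakeCam:
    """Illustrative stand-in for a CAM object that holds hooks on a model.

    All names here are hypothetical; the real GradCAM internals may differ.
    """

    def __init__(self):
        # Stand-ins for the forward/backward hooks a real CAM would register.
        self.hooks = ["forward-hook", "backward-hook"]

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Runs on block exit, even if an exception was raised inside.
        self.release()
        return False  # do not swallow exceptions

    def release(self):
        # Dropping the hooks breaks the references that keep memory alive.
        self.hooks.clear()


# Usage: resources exist inside the block and are released afterwards.
with FakeCam() as cam:
    assert cam.hooks  # hooks are active inside the block
assert cam.hooks == []  # released on exit, so memory can be reclaimed
```

Without the context manager (or an explicit `release()`-style cleanup), hooks can keep referencing activations and gradients across loop iterations, which matches the accumulation described above.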