
Can I apply 'retain_graph' during training?

ChengHan111 opened this issue 2 years ago · 0 comments

Checklist

  • I have searched related issues but cannot get the expected help.
  • I have read related documents and don't know what to do.

Describe the question you meet

Can I use retain_graph=True when doing backprop? Hi, I want to keep the gradient of a non-leaf tensor so it can be used in a loss term. I called retain_grad() on it and plugged it into a loss function, but that loss is not decreasing. Any idea what I should do so the gradient actually flows for gradient descent? Thanks.
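For reference, my understanding is that retain_grad() and retain_graph=True do different things: retain_grad() only asks autograd to store the .grad of a non-leaf tensor so it can be inspected, while retain_graph=True keeps the computation graph alive so backward() can be called through it more than once. Neither of them, on its own, changes which parameters the optimizer updates. Here is a minimal plain-PyTorch sketch of that difference (the tensors are made up for illustration and have nothing to do with mmcls):

    import torch

    # leaf parameter: the kind of tensor an optimizer would update
    w = torch.randn(3, requires_grad=True)

    # non-leaf tensor derived from the leaf
    x = 2.0 * w
    x.retain_grad()                 # only makes autograd populate x.grad

    loss1 = x.sum()
    loss2 = (x ** 2).sum()

    # keep the graph so a second backward() through it is possible
    loss1.backward(retain_graph=True)
    print(w.grad)                   # gradient w.r.t. the leaf
    print(x.grad)                   # filled in only because of retain_grad()

    # second backward through the same graph; gradients accumulate on w
    loss2.backward()
    print(w.grad)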

Here is a code sample:

    # c1, c2 are constants; matrix_no_grad carries no gradient, matrix_with_grad does
    new_means = c1 * matrix_no_grad + c2 * matrix_with_grad
    new_means.retain_grad()
    # print(new_means.requires_grad)  # True
    # print(new_means.grad)           # None
    # print(new_means.is_leaf)        # False
    ...
    # first loss
    losses = self.loss(out_cls, gt_label, **kwargs)
    # second loss
    loss_average_contrast = nn.MSELoss(reduction='none')
    # final_prob_target is just the target, whatever it is
    loss_result = torch.mean(loss_average_contrast(new_means, final_prob_target), dim=-1)
    losses['loss_avg'] = loss_result * loss_weight  # loss_weight is a constant

The second loss is not decreasing while the first one does decrease, even though I normalized both new_means and final_prob_target. When I change new_means into a leaf tensor, the second loss does decrease, so it seems I have some wrong setting around retain_grad().
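In case it matters: my understanding is that the optimizer only ever updates leaf parameters, so the second loss can only decrease if new_means is built from tensors that require gradients and if that loss term actually reaches backward() (in mmcls the entries of the returned losses dict should get summed before backward, as far as I know). Below is a small stand-alone sketch with placeholder tensors instead of the real model, showing the second loss decreasing through a non-leaf tensor without any retain_grad()/retain_graph tricks:

    import torch
    import torch.nn as nn

    matrix_with_grad = torch.randn(8, 4, requires_grad=True)   # trainable leaf
    matrix_no_grad = torch.randn(8, 4)                         # fixed, no gradient
    final_prob_target = torch.rand(8, 4)                       # fixed target

    optimizer = torch.optim.SGD([matrix_with_grad], lr=1.0)
    mse = nn.MSELoss(reduction='none')

    for step in range(200):
        optimizer.zero_grad()
        new_means = 0.5 * matrix_no_grad + 0.5 * matrix_with_grad   # non-leaf
        loss_avg = mse(new_means, final_prob_target).mean()
        loss_avg.backward()          # gradient flows through new_means to the leaf
        optimizer.step()

    print(loss_avg.item())           # much smaller than at step 0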

Post related information

  1. The output of pip list | grep "mmcv\|mmcls\|^torch":

     mmcls          0.18.0
     mmcv-full      1.4.1
     torch          1.8.1
     torchaudio     0.8.0a0+e4e171a
     torchfile      0.1.0
     torchvision    0.9.1
  2. Your config file if you modified it or created a new one.

In mmclassification/mmcls/core/utils/dist_utils.py, I added retain_graph=True to the backward() call.
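For anyone trying to reproduce this, here is a hedged sketch of that edit, assuming the hook in dist_utils.py follows the standard mmcv OptimizerHook layout where after_train_iter() calls runner.outputs['loss'].backward(); the exact method body may differ in this mmcls version:

    # assumed hook layout, not verified against this exact mmcls/mmcv version
    def after_train_iter(self, runner):
        runner.optimizer.zero_grad()
        # retain_graph=True only keeps the autograd graph alive so that
        # backward() could be called through it again; it does not change
        # which tensors receive gradients
        runner.outputs['loss'].backward(retain_graph=True)
        if self.grad_clip is not None:
            self.clip_grads(runner.model.parameters())
        runner.optimizer.step()

Note that unless backward() is actually called a second time on the same graph, retain_graph=True just costs extra memory and should not affect whether either loss decreases.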

ChengHan111 · Sep 14 '22 21:09