VBLC
ASK FOR HELP
Dear Doctor: Your work is excellent! I have some questions I would like your help with. I added LCL to my UDA model by taking the 2-norm of the logits before passing them to the cross-entropy loss function. I changed eps from 1e-7 to 1e-3 because I am using AMP, but after adding LCL my loss curve keeps going up. Did I do anything wrong?
2-norm:
# Normalize logits channel-wise by their L2 norm; eps (here 1e-3) avoids division by zero.
norms = torch.norm(logits_src, p=2, dim=1, keepdim=True) + 1e-3
normed_logit = torch.div(logits_src, norms)
norms_hr = torch.norm(hr_logits_src, p=2, dim=1, keepdim=True) + 1e-3
normed_logit_hr = torch.div(hr_logits_src, norms_hr)
cross-entropy loss:
loss_src = 0.9 * self.loss(normed_logit, gt_src) + \
           0.1 * self.loss(normed_logit_hr, cropped_gt_src)
Looking forward to your help! Thanks! Best regards!
Hi chenyang, thanks for your interest in our work! To be honest, I am not very familiar with AMP, but I will try my best to give some suggestions. In LCL, we use epsilon (eps) to prevent division by zero when dividing the logits by their norms. Therefore, we expect eps to be close to zero, or at least much smaller than the norms. In your case, 1e-3 is perhaps too large for this purpose and could ruin the training process. I would suggest starting by assigning eps the smallest available floating-point number. Another solution could be to force full precision while performing the division with eps=1e-7, and then cast the result back to the intended precision. Feel free to contact me if you have any further questions.
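The second suggestion could be sketched as follows. This is a hypothetical helper, not code from the VBLC repo: it disables CUDA autocast locally, performs the norm division in float32, and casts the result back to the input dtype.

```python
import torch

def normalize_logits_fp32(logits, eps=1e-7):
    # Hypothetical sketch: do the 2-norm division in full precision even
    # when the surrounding forward pass runs under AMP autocast, so a
    # small eps like 1e-7 does not underflow in fp16.
    with torch.cuda.amp.autocast(enabled=False):
        logits_fp32 = logits.float()
        norms = torch.norm(logits_fp32, p=2, dim=1, keepdim=True) + eps
        normed = logits_fp32 / norms
    # Cast back to the intended (possibly half) precision.
    return normed.to(logits.dtype)
```

The normalized logits would then be passed to the cross-entropy loss exactly as before.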
Dear Doctor: Thanks for your help! I tried using full precision locally, but the loss still keeps rising. I have two further questions:
- Does pseudo_weight require additional processing after the 2-norm is applied to the logits?
- Is the Thing-Class ImageNet Feature Distance (FD) loss from DAFormer unused in your work because it conflicts with the other loss functions? Looking forward to your help! Thanks! Best regards!
Hi chenyang,
-
For 1, as you can see from our implementation, pseudo_weight is calculated directly from the logits before normalization. Therefore, I think no extra processing is required.
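As a rough illustration of that point (a hedged sketch in the style of DAFormer-like self-training, not the exact VBLC code; the function name and the 0.968 threshold are assumptions), the pseudo-label and its weight come from the raw logits, so the 2-norm applied for the loss does not affect them:

```python
import torch

def pseudo_label_and_weight(logits_tgt, threshold=0.968):
    # Hypothetical sketch: pseudo-labels and the confidence-based weight
    # are derived from the raw (unnormalized) target logits, so no extra
    # processing is needed after the logits are 2-normalized for the loss.
    probs = torch.softmax(logits_tgt.detach(), dim=1)
    max_probs, pseudo_label = torch.max(probs, dim=1)
    # Weight = fraction of pixels whose confidence exceeds the threshold.
    pseudo_weight = (max_probs >= threshold).float().mean()
    return pseudo_label, pseudo_weight
```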
-
For 2, we haven't tried the FD loss in VBLC, so I can't say for certain what effect it has. Feedback is welcome if you would like to share the influence of this loss on our method!
Thank you for your reply! I will experiment with adding/removing the FD loss later and will give you feedback if it works.
Hello, Could you tell me about the versions of mmcv and other dependency packages? I encountered some version issues while installing the dependency packages. Thank you!
Hi TiSgrc, how about providing more details on the issues (e.g., the commands you run, the procedures you take, and the traceback of errors) so that we can have a better idea of where to look into?
Hi KiwiXR, I am glad to receive your reply. The specific error is as follows: the versions of mmseg, mmcv, and pytorch do not match. May I ask which versions you use? Could you share your pip list?
Hi TiSgrc, the recommended combination is mmcv-full==1.5.0 and torch==1.10.2+cu113, and there is no need to install mmseg, as this repo itself is a modification of it. You can simply run the scripts from /root/VBLC/ to ensure the right mmseg is imported. Please check our README.md and requirements.txt for the detailed dependencies and our instructions for reproduction.
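A possible install sequence for that combination might look like the following. This is a sketch assuming a CUDA 11.3 machine; the wheel index URLs are the standard PyTorch and OpenMMLab ones and may need adjusting per the repo's README.md and requirements.txt.

```shell
# Install the recommended PyTorch build (CUDA 11.3 wheels); torchvision
# 0.11.3 is the release paired with torch 1.10.2.
pip install torch==1.10.2+cu113 torchvision==0.11.3+cu113 \
    -f https://download.pytorch.org/whl/cu113/torch_stable.html

# Install the matching pre-built mmcv-full wheel for cu113 / torch 1.10.x.
pip install mmcv-full==1.5.0 \
    -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10.0/index.html
```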
Thank you for your reply; it is very important to me. I will try these versions. Thank you again, and I wish you smooth research!