FGD
Very slow training
At line 86 in the forward function of mmdet/distillation/losses/fgd.py there are two for loops. In my tests, these two loops slow training down significantly. Is there any solution?
The for loops iterate over the batch, so I wouldn't expect a big slowdown; there are only 16 samples in total.
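For what it's worth, here is a minimal sketch of one way to cut the loops down, assuming (as in my reading of the code) that the loops around line 86 build the per-image foreground mask from the ground-truth boxes, with an outer loop over the batch and an inner loop over boxes. The function name `build_fg_mask` and its signature are illustrative, not the repository's API; the inner per-box loop is replaced with broadcasting so only the batch loop remains:

```python
import torch

def build_fg_mask(gt_bboxes, img_shapes, feat_h, feat_w):
    """Hypothetical helper, not the repo's code.

    gt_bboxes:  list of (M_i, 4) xyxy tensors in image coordinates.
    img_shapes: list of (img_h, img_w) per image.
    Returns an (N, feat_h, feat_w) foreground mask.
    """
    n = len(gt_bboxes)
    device = gt_bboxes[0].device if n else 'cpu'
    mask_fg = torch.zeros(n, feat_h, feat_w, device=device)
    ys = torch.arange(feat_h, device=device)
    xs = torch.arange(feat_w, device=device)
    for i, boxes in enumerate(gt_bboxes):  # loop over the batch only
        if boxes.numel() == 0:
            continue
        img_h, img_w = img_shapes[i]
        # scale boxes to feature-map coordinates
        wmin = (boxes[:, 0] / img_w * feat_w).floor().long()
        wmax = (boxes[:, 2] / img_w * feat_w).ceil().long().clamp(max=feat_w - 1)
        hmin = (boxes[:, 1] / img_h * feat_h).floor().long()
        hmax = (boxes[:, 3] / img_h * feat_h).ceil().long().clamp(max=feat_h - 1)
        # weight for each box: inverse of its area on the feature map
        area = 1.0 / ((hmax - hmin + 1) * (wmax - wmin + 1)).float()  # (M,)
        # (M, H, W) boolean: pixel (y, x) lies inside box j
        inside = (ys[None, :, None] >= hmin[:, None, None]) \
               & (ys[None, :, None] <= hmax[:, None, None]) \
               & (xs[None, None, :] >= wmin[:, None, None]) \
               & (xs[None, None, :] <= wmax[:, None, None])
        # per-pixel max of the box weights over all covering boxes,
        # replacing the sequential torch.maximum updates of the inner loop
        mask_fg[i] = (inside.float() * area[:, None, None]).max(dim=0).values
    return mask_fg
```

Note the trade-off: the broadcast materializes an (M, H, W) tensor per image, so this trades memory for fewer kernel launches. It would also be worth profiling first to confirm the Python-side loops (many tiny slicing ops) are actually the bottleneck.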
I also encountered the same problem. Have you solved it?
Is there a need to add a `with torch.no_grad():` line when computing the mask?
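My understanding, as a hedged sketch rather than a statement about the repo's code: if the mask is built purely from ground-truth boxes, no gradient flows through it anyway (gt boxes don't require grad), so `no_grad()` would only skip some autograd bookkeeping. It matters more for masks derived from feature maps, e.g. attention masks computed from the teacher's features, which should be kept out of the graph either with `no_grad()` or `.detach()`. Something along these lines, where the function name and exact attention formula are my approximation, not necessarily what fgd.py does:

```python
import torch

def get_attention_masks(preds_T, temp=0.5):
    """Illustrative sketch: spatial/channel attention from teacher features."""
    with torch.no_grad():  # keep the masks out of the autograd graph
        n, c, h, w = preds_T.shape
        value = preds_T.abs()
        # spatial attention: softmax over H*W of the channel-mean activation
        s_attention = (h * w * torch.softmax(
            (value.mean(dim=1) / temp).view(n, -1), dim=1)).view(n, h, w)
        # channel attention: softmax over C of the per-channel spatial mean
        c_attention = c * torch.softmax(value.mean(dim=(2, 3)) / temp, dim=1)
    return s_attention, c_attention
```

Either way, this affects gradient flow and memory, not really the loop speed asked about above.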