GHM_Detection
Some questions about the GHM loss
Hi, Thanks for your great work and sharing code! I have some questions:
1. In line 47 of ghm_loss.py, you update the accumulator with the following code: self.acc_sum[i] = mmt * self.acc_sum[i] + (1 - mmt) * num_in_bin. In https://github.com/libuyu/GHM_Detection/issues/14#issue-422548624, you explained that self.acc_sum 'would consider not only samples in the current batch, but also its previous value'. However, according to the update of self.acc_sum in the code, and to update equation (12) in the paper, I think that at iteration t the self.acc_sum of the i-th bin depends only on its previous value. Am I missing something?
2. In https://github.com/libuyu/GHM_Detection/issues/4#issuecomment-458033785, you explained 'sum[i+1] = mmt * sum[i] + (1 - mmt) * num[i]'. It seems that, at each iteration t, the acc_sum of the (i+1)-th bin depends on the acc_sum of the i-th bin. Is that wrong?
3. When I use only the GHM-C loss for pixel-level classification in a segmentation task, the loss does not decrease. Could you give me some suggestions?
Thanks again, and sorry for bothering you. Looking forward to your reply.
1 & 2: The exponential moving average (EMA) is just a widely used technique to keep a variable stable during updating (see the Wikipedia article on moving averages). The SGD optimizer also adopts it via its momentum parameter. If mmt is 0.75 here, the new acc_sum comes from 75% of the previous acc_sum and 25% of the newly computed sample count (num_in_bin); you just missed the num_in_bin term. Also, in the comment you quoted, the index refers to the iteration, not the bin: the sum at iteration i+1 depends on the sum at iteration i plus the current batch's count.
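To make the update concrete, here is a minimal sketch of the EMA step applied to a single bin; the mmt value of 0.75 matches the example above, but the per-batch counts are made-up numbers for illustration:

```python
mmt = 0.75  # momentum; 0.75 is the example value discussed above

def ema_update(acc_sum_prev, num_in_bin):
    # 75% from the previous acc_sum, 25% from the current batch's count:
    # the new value depends on BOTH terms, not only on the previous one.
    return mmt * acc_sum_prev + (1 - mmt) * num_in_bin

acc_sum = 0.0
for num_in_bin in [100, 120, 80]:  # hypothetical counts from three batches
    acc_sum = ema_update(acc_sum, num_in_bin)

print(acc_sum)  # smoothed estimate of this bin's sample count
```

Note that if num_in_bin were really ignored, acc_sum would stay at zero forever; the nonzero result shows each batch's count contributing 25% at every step.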
3: For segmentation, I think you can simply use two fixed loss weights for the pos/neg samples to balance the loss.
Thanks.