DCFNet
VID2015 and Center loss
Hi,
Thank you for the great work. I have a few questions:
- The previous version of DCFNet was trained on UAV123, NUS-PRO, and TC-128, while the current one trains on VID2015. Why the change?
- Why don't you use CenterLoss anymore? And why, in the first place, did CenterLoss not propagate (forward only, no backward)?
Thank you
- The recent trend is to train on VID, for two reasons.
- First, the amount of video in VID is very large (~1 million images across more than 4,000 videos). This far exceeds all the tracking datasets combined.
- Second, there is a common belief that training on the tracking datasets risks overfitting. The VOT committee expressly prohibits it ("Learning from the tracking datasets (OTB, VOT, ALOV, NUSPRO) is prohibited.").
- CenterLoss was inherited directly from SiamFC, purely to visualize convergence.
- But I found it too slow, and it provided no additional information, so I decided to remove it.
- CenterLoss is only used to visualize convergence, like Top-1/5 accuracy in image classification. It is non-differentiable (think of Top-1 error: only the cross-entropy loss can backpropagate).
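To illustrate the idea, here is a minimal sketch (not the repository's actual implementation; the function name and the toy response map are hypothetical) of a center-distance style monitoring metric. Because it goes through `argmax`, no gradient can flow through it, so it can only be logged, never trained on:

```python
import numpy as np

def center_metric(response):
    """Distance (in pixels) of the response-map peak from the map center.

    A monitoring-only metric, analogous to Top-1 accuracy: argmax is
    non-differentiable, so nothing can backpropagate through it.
    """
    h, w = response.shape
    peak = np.unravel_index(np.argmax(response), response.shape)
    center = ((h - 1) / 2.0, (w - 1) / 2.0)
    return float(np.hypot(peak[0] - center[0], peak[1] - center[1]))

# Hypothetical 5x5 response map whose peak sits exactly at the center.
resp = np.zeros((5, 5))
resp[2, 2] = 1.0
print(center_metric(resp))  # 0.0: the response peaks at the center
```

During training such a value would be printed alongside the real (differentiable) loss, exactly the role Top-1/5 plays in image classification.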
Thanks for your attention.