Cydia2018
> You don't need to remove it; I fixed the bug and now it works well. Thanks! I trained FCOS on the VOC dataset, and the final mAP is about 0.67....
> High mAP in papers is often due to many tricks and to data augmentation. My implementation doesn't use any data augmentation, which is very important for getting a good result. And...
> It's really not easy to train a network. The difference between 0.67 and 0.69 is really small, and neither is a good result on the VOC dataset. Did you train...
> A common approach is to train on the VOC07 and VOC12 train+val sets, then evaluate on the VOC07 test set. The dataset setting you used is rare, and a 0.02 mAP gap cannot measure...
> It depends on whether you train it from scratch and how large the batch size is. You can set the number of epochs a little higher, such as 50 or 100, and...
Just downgrade Pillow to 6.2.2 ;)
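For reference, the downgrade suggested above can be done with pip (this assumes a pip-managed environment; adjust for conda or a virtualenv as needed):

```shell
pip install "pillow==6.2.2"
```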
> > Could you also release the pruning part of the code? I still have some questions about the pruning section of the paper and the current code. Thanks.
>
> I hadn't planned to release the pruning code, partly because it is messy, and partly because I feel my implementation is a bit simplistic; RMNet pruning should be able to perform better, and I hoped others could build something better on top of the models I released. But since there is demand, I have cleaned up the code: https://github.com/fxmeng/RMNet/blob/242f849c6e5e891646bbc90f89310268d183c310/train_pruning.py

Thanks for your work!
Hello, I ran the pruning training code and found that it does not converge, so I made some changes based on the idea of Network Slimming:

```python
def update_mask(self, sr, threshold):
    for m in self.modules():
        if isinstance(m, nn.Conv2d):
            # Apply an L1 sparsity penalty (Network Slimming style)
            # only to the 1x1 grouped convolutions
            if m.kernel_size == (1, 1) and m.groups != 1:
                m.weight.grad.data.add_(sr * torch.sign(m.weight.data))
                # m1 = m.weight.data.abs() > threshold
                # m.weight.grad.data *= m1
                # m.weight.data *= m1
```

```python
def prune(self, use_bn=True, threshold=0.1): ...
```
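For context, here is a minimal, self-contained sketch of how an L1 sparsity update like the one above could be hooked into a training step. `TinyRMBlock` and `sparsity_update` are illustrative names only, not from the repository; the update adds `sr * sign(w)` to the gradient after `backward()` and before `optimizer.step()`, pushing unimportant 1x1 grouped-conv weights toward zero:

```python
import torch
import torch.nn as nn

class TinyRMBlock(nn.Module):
    """Toy module with a grouped 1x1 conv, standing in for an RMNet block."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(4, 4, kernel_size=1, groups=2, bias=False)

    def forward(self, x):
        return self.conv(x)

def sparsity_update(model, sr):
    # L1 subgradient on 1x1 grouped conv weights: grad <- grad + sr * sign(w)
    for m in model.modules():
        if isinstance(m, nn.Conv2d) and m.kernel_size == (1, 1) and m.groups != 1:
            m.weight.grad.data.add_(sr * torch.sign(m.weight.data))

model = TinyRMBlock()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(2, 4, 8, 8)
loss = model(x).pow(2).mean()
loss.backward()
sparsity_update(model, sr=1e-4)  # after backward(), before the optimizer step
opt.step()
```

A typical follow-up (as in Network Slimming) is to threshold the trained weights by magnitude and remove the channels whose weights have been driven to zero.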
Have you solved this problem?