faster-rcnn.pytorch
Why should we fix some specific layers in ResNet?
I am confused about line 250 in lib/model/faster_rcnn/resnet.py. Why should we fix some layers in ResNet?

```python
# Fix blocks
for p in self.RCNN_base[0].parameters(): p.requires_grad = False
for p in self.RCNN_base[1].parameters(): p.requires_grad = False
```
Shouldn’t these layers be involved in training jointly?
I have the same question. Did you find an answer? If so, could you please share it?
Because we are using pretrained weights (resnet101_caffe.pth), we are in essence doing transfer learning: all layers have already been trained on ImageNet, and now we want to train the model on COCO, VOC, or a custom dataset. So we fix (freeze) the earlier layers, which only detect generic, non-specific features such as edges and textures, and leave the later layers unfrozen so they adapt to the new dataset. This also reduces training time, since frozen parameters get no gradient updates.
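To illustrate the mechanism, here is a minimal sketch of the same freezing pattern using a toy `nn.Sequential` as a stand-in for `RCNN_base` (the three-conv "backbone" below is hypothetical; the real code freezes the stem and first residual stage of ResNet-101):

```python
import torch
import torch.nn as nn

# Toy stand-in backbone: two "early" blocks and one "late" block.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),    # block 0: generic low-level features
    nn.Conv2d(8, 16, 3, padding=1),   # block 1: still fairly generic
    nn.Conv2d(16, 32, 3, padding=1),  # block 2: task-specific features
)

# Freeze the first two blocks, mirroring what resnet.py does
# for RCNN_base[0] and RCNN_base[1].
for p in backbone[0].parameters():
    p.requires_grad = False
for p in backbone[1].parameters():
    p.requires_grad = False

# Hand the optimizer only the parameters that still require gradients;
# the frozen blocks keep their pretrained weights untouched.
trainable = [p for p in backbone.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-3, momentum=0.9)

print(sum(p.numel() for p in trainable))  # → 4640 (only block 2 trains)
```

Note that simply setting `requires_grad = False` is enough to stop updates; filtering the parameter list before building the optimizer just avoids carrying momentum buffers for weights that will never change.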