Freezing layers - EfficientDet
Hi!
Is the load_pretrained_model_from feature performing transfer learning with frozen layers, or would this need to be implemented somewhere else?
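(One quick empirical check: inspect requires_grad on the loaded model's parameters. A minimal sketch, assuming gtf is the Monk detector object and that it exposes the model under system_dict["local"]["model"] the same way train_detector.py does:

model = gtf.system_dict["local"]["model"]
frozen = sum(1 for p in model.parameters() if not p.requires_grad)
total = sum(1 for p in model.parameters())
print("Frozen parameters: {} of {}".format(frozen, total))
# If load_pretrained_model_from leaves everything trainable, frozen will be 0.
)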
I would like to freeze all layers but the last one and train using a pretrained model. How should I do this? Maybe something like the following, but I don't really know where to write it.
efficientdet = self.system_dict["local"]["model"]
for p in efficientdet.parameters():
    p.requires_grad = False
# nn.Module is not indexable, so take the last child module instead:
for p in list(efficientdet.children())[-1].parameters():
    p.requires_grad = True
Thank you in advance!
I wrote the following lines in train_detector.py in order to perform transfer learning:
- in Set_Hyperparams: modified the optimizer so that only trainable parameters are passed to it:
self.system_dict["local"]["optimizer"] = torch.optim.Adam(filter(lambda x: x.requires_grad, self.system_dict["local"]["model"].parameters()), self.system_dict["params"]["lr"]);
- in Train: freeze layers, where freeze is the list of layer names to be frozen, e.g. freeze = ['conv3', 'conv4', 'backbone_net']:
for name, child in self.system_dict["local"]["model"].module.named_children():
    if name in freeze:
        print(name + ': FROZEN')
        for param in child.parameters():
            param.requires_grad = False
    else:
        print(name + ': UNFROZEN')
        for param in child.parameters():
            param.requires_grad = True
However, I'm still not sure whether this is the correct way to do it.
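One way to sanity-check the setup is to count which parameters will actually receive gradients after the freezing loop has run. A minimal sketch, assuming the same system_dict layout as in the snippets above:

model = self.system_dict["local"]["model"].module
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
print("Trainable: {} | Frozen: {}".format(trainable, frozen))

Note also that the filter(...) passed to the optimizer is evaluated when torch.optim.Adam is constructed, so it only excludes parameters whose requires_grad flag is already False at that point; parameters frozen later (e.g. in Train, if Set_Hyperparams runs first) stay in the optimizer, although they still won't be updated as long as their .grad remains None.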
if opt.head_only:
    def freeze_backbone(m):
        classname = m.__class__.__name__
        for ntl in ['EfficientNet', 'BiFPN']:
            if ntl in classname:
                for param in m.parameters():
                    param.requires_grad = False
    model.apply(freeze_backbone)
    print('[Info] freezed backbone')
This is how I am doing it: model.apply calls freeze_backbone recursively on every submodule, so anything whose class name contains 'EfficientNet' or 'BiFPN' gets frozen while the detection heads stay trainable.
Btw, I am trying to freeze half of the network but can't pinpoint the initial layers to freeze due to the architecture. Anyone got any idea about how that could be done?
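One possible approach, sketched under the assumption that the network's leaf modules are registered roughly in forward order (usually true for EfficientDet implementations, but worth verifying on the exact model): enumerate the leaf modules and freeze the first half by position. The freeze_first_half name is hypothetical.

import torch.nn as nn

def freeze_first_half(model: nn.Module):
    # Leaf modules (no children) in registration order; assumed to
    # roughly match the forward-pass order of the network.
    leaves = [m for m in model.modules() if not list(m.children())]
    cutoff = len(leaves) // 2
    for i, leaf in enumerate(leaves):
        for p in leaf.parameters():
            # Freeze the first half of the leaves, keep the rest trainable.
            p.requires_grad = i >= cutoff

Alternatively, print [n for n, _ in model.named_modules()] once to see the layer names, then reuse the class-name filter from freeze_backbone above with whichever blocks you want frozen.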