'freeze_modules_based_on_opts()' is freezing module parameters twice
Example

```python
import argparse
from collections import OrderedDict

import torch.nn as nn

# freeze_modules_based_on_opts is the corenet utility under discussion.
opts = argparse.Namespace(**{"model.freeze_modules": "conv1"})
model = nn.Sequential(
    OrderedDict([
        ("conv1", nn.Conv2d(20, 64, 5)),
        ("conv2", nn.Conv2d(20, 64, 5)),
    ])
)
print(freeze_modules_based_on_opts(opts, model))
```
Freezing modules (i.e., disabling `.train()`) is different from freezing parameters (i.e., setting `requires_grad=False`). As this issue is closely related to #10, I'm closing this one.
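For context, here is a minimal sketch in plain PyTorch (not corenet-specific) of that distinction: putting a module in eval mode only changes its training-time behaviour, while setting `requires_grad=False` is what actually stops gradient updates for its parameters.

```python
import torch.nn as nn

layer = nn.BatchNorm2d(64)

# Eval mode changes the behaviour of layers such as BatchNorm/Dropout,
# but gradients are still computed for the layer's parameters.
layer.eval()  # equivalent to layer.train(False)
print(layer.training)                                      # False
print(all(p.requires_grad for p in layer.parameters()))    # True

# Freezing parameters stops gradient updates; the module itself can
# still be in training mode (e.g. BatchNorm running stats keep updating).
for p in layer.parameters():
    p.requires_grad = False
print(all(not p.requires_grad for p in layer.parameters()))  # True
```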