gdrnpp_bop2022
Freeze Layers for Training
Have you implemented a way to freeze certain layers of the network during training? I have already trained the backbone and the geometric-feature portion of the network, and I would now like to fine-tune only the Patch-PnP module and the head.
If this is not implemented, would you recommend editing the model itself, or doing it some other way?
You can set the parameter `FREEZE` to `True`.
But this does not work if I first train to a checkpoint with all layers trainable, then freeze some layers and resume training. The optimizer's saved state no longer matches the (now smaller) set of trainable parameters, so loading the checkpoint fails.
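One common workaround for this mismatch is to restore only the model weights from the checkpoint and rebuild the optimizer from scratch over the still-trainable parameters, rather than loading the old optimizer state. The sketch below illustrates the idea with placeholder modules; the two `nn.Linear` layers merely stand in for the backbone and the Patch-PnP/head portions, and none of the names correspond to GDRNPP's actual modules or config keys:

```python
import torch
import torch.nn as nn

# Stand-in model: layer 0 plays the role of the pretrained backbone,
# layer 1 the role of the Patch-PnP / head to be fine-tuned.
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.Linear(8, 2),
)

# Pretend this checkpoint was saved after training all layers.
ckpt = {"model": model.state_dict()}

# --- resuming with some layers frozen ---
model.load_state_dict(ckpt["model"])   # restore weights only
for p in model[0].parameters():        # freeze the "backbone"
    p.requires_grad = False

# Build a fresh optimizer over trainable parameters only; deliberately
# do NOT call optimizer.load_state_dict(), since the saved param groups
# no longer match the reduced trainable set.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
```

The trade-off is that any optimizer momentum/statistics from the first training phase are discarded; for fine-tuning a small head this is usually acceptable, and it sidesteps the parameter-group mismatch entirely.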