Cyrillic support
Title says it all, really.
@fourson .eval() won't help; it only changes the mode of the module. To freeze it, you need to call self.loss.requires_grad_(False).
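A minimal sketch of the difference, assuming the loss is a submodule of a LightningModule (the names LitModel, backbone, and loss_net are hypothetical): requires_grad_(False) stops gradients from reaching the loss module's parameters, while .eval() only switches layers like Dropout or BatchNorm to inference behavior and freezes nothing.

```python
import torch
from torch import nn
import lightning as L


class LitModel(L.LightningModule):
    def __init__(self, backbone: nn.Module, loss_net: nn.Module):
        super().__init__()
        self.backbone = backbone
        self.loss = loss_net
        self.loss.requires_grad_(False)  # freeze all parameters of the loss module
        self.loss.eval()                 # optional: fix dropout/norm behavior, does not freeze

    def training_step(self, batch, batch_idx):
        x, y = batch
        # gradients still flow *through* the loss to the backbone,
        # but the loss module's own parameters are never updated
        return self.loss(self.backbone(x), y)
```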
It is not possible to remove a parameter from a PyTorch nn.Module directly, so Lightning can't provide that functionality. But you can use a trick if you really care about it: if you put your loss in a plain Python list, PyTorch won't register it as a submodule: self.loss = [loss] (then access it via self.loss[0] in your code). The drawback is that self.loss[0] won't be moved to the GPU automatically.
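A sketch of that list trick under the same hypothetical names as above: wrapping the loss in a list hides it from nn.Module's submodule registration, so its parameters don't appear in self.parameters() or the model summary, and you move it to the right device by hand (here in setup(), as one possible place).

```python
import torch
from torch import nn
import lightning as L


class LitModel(L.LightningModule):
    def __init__(self, backbone: nn.Module, loss_net: nn.Module):
        super().__init__()
        self.backbone = backbone
        self.loss = [loss_net]  # not registered as a submodule

    def setup(self, stage: str) -> None:
        # Lightning won't move the hidden loss module, so do it manually
        self.loss[0] = self.loss[0].to(self.device)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return self.loss[0](self.backbone(x), y)
```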
@fourson What do you think about my comment above? Btw, I also opened #19468, which adds a "Mode" column to the summary showing which layers are in training mode. Your frozen layers would show up as "eval" there. Maybe that's helpful in your case.
For the reasons outlined in my previous comment, I don't think we can provide a mechanism to "exclude" parameters and modules. I believe any hacks we could do would go against PyTorch's design.
Thanks for the comment. The "eval" flag shown in the summary would be very helpful for my use case.