[New Rule Request] Torchfix should issue a warning when a torch.nn.Module stores its layers in a python list instead of a torch.nn.ModuleList
Inside a model definition, torch.nn.Module objects stored in a plain Python list do not get their parameters registered. As a result, those parameters are invisible to the optimizer and never get trained, even though the layers are used in the call graph formed by forward(). TorchFix should flag this pattern -- currently no warning is issued for it.
Example:
class FeedForward(torch.nn.Module):
    def __init__(self, n_features, n_classes, n_hidden, width):
        super().__init__()
        # Ideally, torchfix should issue a warning on the line below:
        # the parameters of the hidden layers do not get registered if they are in a plain list, and are not optimized!
        self.hidden_layers = [torch.nn.Linear(n_features if i == 0 else width, width, bias=True) for i in range(n_hidden)]
        # Correct version of the above code -- use torch.nn.ModuleList([...]) instead of a plain Python list [...]
        self.hidden_layers = torch.nn.ModuleList([torch.nn.Linear(n_features if i == 0 else width, width, bias=True) for i in range(n_hidden)])
        # Dummy call to torch.solve() to trigger a torchfix warning (to demonstrate that torchfix is otherwise working)
        torch.solve()
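For context on why the plain list silently fails: nn.Module.__setattr__ intercepts attribute assignment and records Module-valued attributes in an internal registry, while a plain list is stored as an ordinary attribute whose contents are never inspected. A minimal stand-in with no torch dependency can sketch the mechanism; the names Layer, LayerList, and Model here are hypothetical stand-ins for torch.nn.Linear, torch.nn.ModuleList, and torch.nn.Module, not the real classes:

```python
class Layer:
    """Stand-in for torch.nn.Linear: owns one trainable parameter."""
    def __init__(self, name):
        object.__setattr__(self, "name", name)

    def parameters(self):
        yield self.name

class LayerList(Layer):
    """Stand-in for torch.nn.ModuleList: a Layer that holds Layers."""
    def __init__(self, layers):
        object.__setattr__(self, "layers", list(layers))

    def parameters(self):
        for layer in self.layers:
            yield from layer.parameters()

class Model:
    """Mimics nn.Module.__setattr__: only Layer values get registered."""
    def __init__(self):
        object.__setattr__(self, "_registry", {})

    def __setattr__(self, attr, value):
        if isinstance(value, Layer):           # Module / ModuleList: registered
            self._registry[attr] = value
        object.__setattr__(self, attr, value)  # anything else: just stored

    def parameters(self):
        for layer in self._registry.values():
            yield from layer.parameters()

m = Model()
m.hidden = [Layer("w0"), Layer("w1")]             # plain list: never registered
print(list(m.parameters()))                       # []
m.hidden = LayerList([Layer("w0"), Layer("w1")])  # registered container
print(list(m.parameters()))                       # ['w0', 'w1']
```

The real nn.Module does the same dispatch (plus parameter and buffer handling), which is why the bug is silent: the assignment succeeds and forward() still works, only optimization is broken.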
Torchfix output:
$ torchfix --select=ALL ./supervised/nn/feed_forward_nn.py
supervised/nn/feed_forward_nn.py:20:9: TOR001 Use of removed function torch.solve: https://github.com/pytorch-labs/torchfix#torchsolve
Finished checking 1 files.
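TorchFix itself is built on libcst, but the detection logic this rule needs can be sketched with the standard-library ast module: flag any `self.<attr>` assignment whose right-hand side is a list literal or list comprehension that constructs `torch.nn.*` objects. The function name `check_plain_module_lists` and its heuristics are assumptions for illustration, not part of the TorchFix API:

```python
import ast

def _calls_torch_nn(node):
    """True if the AST node is a call of the form torch.nn.Something(...)."""
    if not (isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute)):
        return False
    inner = node.func.value  # the `torch.nn` part of `torch.nn.Linear`
    return (isinstance(inner, ast.Attribute) and inner.attr == "nn"
            and isinstance(inner.value, ast.Name) and inner.value.id == "torch")

def check_plain_module_lists(source):
    """Return (lineno, attr) pairs where self.<attr> is assigned a plain
    Python list of torch.nn.* modules instead of a ModuleList."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Assign):
            continue
        for target in node.targets:
            # only look at assignments to self.<attr>
            if not (isinstance(target, ast.Attribute)
                    and isinstance(target.value, ast.Name)
                    and target.value.id == "self"):
                continue
            rhs = node.value
            # case 1: list literal, e.g. [torch.nn.Linear(...), ...]
            if isinstance(rhs, ast.List) and any(_calls_torch_nn(e) for e in rhs.elts):
                findings.append((node.lineno, target.attr))
            # case 2: list comprehension, e.g. [torch.nn.Linear(...) for ...]
            elif isinstance(rhs, ast.ListComp) and _calls_torch_nn(rhs.elt):
                findings.append((node.lineno, target.attr))
    return findings

src = """
class FeedForward(torch.nn.Module):
    def __init__(self, n_hidden, width):
        super().__init__()
        self.hidden_layers = [torch.nn.Linear(width, width) for i in range(n_hidden)]
        self.out = torch.nn.Linear(width, 1)
"""
print(check_plain_module_lists(src))  # [(5, 'hidden_layers')]
```

A ModuleList assignment is a Call node rather than a List/ListComp, so the correct version passes cleanly; a production rule would also want to handle aliased imports (`from torch import nn`) and bare `nn.Linear` names, which TorchFix's libcst-based qualified-name resolution handles.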