pytorch-cnn-finetune
Auto Flattening of features
Hi,
I found an annoying feature/bug: the features output is always flattened automatically. The forward pass looks like this:
```python
def forward(self, x):
    x = self.features(x)
    if self.pool is not None:
        x = self.pool(x)
    if self.dropout is not None:
        x = self.dropout(x)
    if self.flatten_features_output:
        x = x.view(x.size(0), -1)
    x = self.classifier(x)
    return x
```
Why have `self.flatten_features_output` if it is always going to be `True`? There is no way to set that attribute from the outside, and even when pooling is set to `None`, the tensor is still flattened automatically instead of giving the user control over this step.
Any plans to change this, or to expose the flag?
I just fixed it for myself and created a pull request.
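For reference, a minimal sketch of what exposing the flag could look like. This is a hypothetical `FinetuneModel` class for illustration, not the library's actual API; it simply makes `flatten_features_output` a constructor argument instead of a hardcoded `True`:

```python
import torch
import torch.nn as nn


class FinetuneModel(nn.Module):
    """Illustrative sketch: flatten_features_output is configurable."""

    def __init__(self, features, classifier, pool=None, dropout=None,
                 flatten_features_output=True):
        super().__init__()
        self.features = features
        self.pool = pool
        self.dropout = dropout
        self.classifier = classifier
        # Exposed instead of being implicitly always True
        self.flatten_features_output = flatten_features_output

    def forward(self, x):
        x = self.features(x)
        if self.pool is not None:
            x = self.pool(x)
        if self.dropout is not None:
            x = self.dropout(x)
        if self.flatten_features_output:
            x = x.view(x.size(0), -1)
        x = self.classifier(x)
        return x


# Passing flatten_features_output=False preserves the spatial dimensions,
# so a convolutional "classifier" head could consume them directly.
model = FinetuneModel(
    features=nn.Conv2d(3, 8, kernel_size=3, padding=1),
    classifier=nn.Identity(),
    flatten_features_output=False,
)
out = model(torch.randn(2, 3, 16, 16))
print(out.shape)  # torch.Size([2, 8, 16, 16])
```

With the default `flatten_features_output=True`, the same input would come out as a flat `(2, 2048)` tensor, which is the behavior the current code forces in all cases.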