SKNet_pytorch
FC implementation
Hello @ResearchingDexter, regarding the fully connected (fc) layer implementation: why not use nn.Linear() here? I would expect nn.Linear() --> bn --> relu. Also, why did you use bias=False in your conv2d? Thanks, I look forward to your reply.
The reason I used conv2d to implement the fully connected layer is that the author of SKNet used conv2d. And because the bn layer already has a bias, it is not necessary to use bias=True in the conv2d layer.
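In other words, the pattern is conv2d(1x1) --> bn --> relu. A minimal sketch of that pattern (the channel sizes are illustrative, not the exact ones used in this repo):

```python
import torch
import torch.nn as nn

# A 1x1 conv applied to a (N, C, 1, 1) tensor acts as a fully connected layer.
# bias=False is fine here because the following BatchNorm has its own learnable
# shift (beta), which would make a conv bias redundant.
fc_as_conv = nn.Sequential(
    nn.Conv2d(64, 32, kernel_size=1, bias=False),
    nn.BatchNorm2d(32),
    nn.ReLU(inplace=True),
)

s = torch.randn(8, 64, 1, 1)   # e.g. the output of global average pooling
z = fc_as_conv(s)              # shape: (8, 32, 1, 1)
```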
OK, I get it. Thanks.
Sure.
output=self.fc(output)
output=self.softmax(output)
return output

Is the softmax function needed here?
s=self.global_pool(U)
z=self.fc1(s)
a_b=self.fc2(z)
a_b=a_b.reshape(batch_size,self.M,self.out_channels,-1)
a_b=self.softmax(a_b)
Do you mean this part here? @XUYUNYUN666
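(For reference, the quoted block is the branch-attention softmax, not the classifier. A minimal shape sketch, assuming the softmax here runs over dim=1, the M branch dimension, as in the SKNet "select" step:)

```python
import torch

batch_size, M, out_channels = 8, 2, 64

# a_b as produced by fc2, with the M branches flattened into the channel axis
a_b = torch.randn(batch_size, M * out_channels, 1, 1)
a_b = a_b.reshape(batch_size, M, out_channels, -1)

# Softmax across the branch dimension, so for every channel the
# M branch weights sum to 1.
a_b = torch.softmax(a_b, dim=1)
print(a_b.sum(dim=1))  # all ones, shape (batch_size, out_channels, 1)
```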
No, I mean line 86, the softmax after the last fc layer. Isn't it unnecessary?
I have a view on using conv2d instead of an fc layer: since the weighting here is over the channel or spatial dimension, not a final classifier layer, we should keep the four-dimensional shape so the element-wise product can be applied along the channel or spatial dimension.
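A minimal sketch of what I mean (the sigmoid gating is just a stand-in, not the code of this repo):

```python
import torch
import torch.nn as nn

N, C, H, W = 8, 64, 32, 32
feature_map = torch.randn(N, C, H, W)
pooled = feature_map.mean(dim=(2, 3), keepdim=True)     # (N, C, 1, 1)

# A 1x1 conv keeps the 4-D layout, so the per-channel weights broadcast
# directly against the (N, C, H, W) feature map.
conv_fc = nn.Conv2d(C, C, kernel_size=1, bias=False)
weights = torch.sigmoid(conv_fc(pooled))                # (N, C, 1, 1)
reweighted = feature_map * weights                      # broadcast over H, W

# With nn.Linear the same thing needs an explicit flatten and reshape:
linear_fc = nn.Linear(C, C, bias=False)
weights2 = torch.sigmoid(linear_fc(pooled.flatten(1)))  # (N, C)
reweighted2 = feature_map * weights2.view(N, C, 1, 1)
```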
It is not necessary; it depends on the loss function you use. And I don't quite follow your view about conv2d instead of an fc layer. @XUYUNYUN666
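For example, assuming the standard PyTorch losses: nn.CrossEntropyLoss expects raw logits and applies log-softmax internally, so an extra softmax before it would effectively apply softmax twice, while nn.NLLLoss expects log-probabilities:

```python
import torch
import torch.nn as nn

logits = torch.randn(4, 10)            # raw outputs of the last fc layer
target = torch.randint(0, 10, (4,))

# CrossEntropyLoss takes raw logits and applies log-softmax internally.
loss_from_logits = nn.CrossEntropyLoss()(logits, target)

# NLLLoss needs log-probabilities, so here the log-softmax is explicit.
loss_from_log_probs = nn.NLLLoss()(torch.log_softmax(logits, dim=1), target)

print(torch.allclose(loss_from_logits, loss_from_log_probs))  # True
```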