pytorch-mobilenet-v2
About the inverted residual block module
Hi, I have a question about the inverted residual block module in your code. When I implemented this model, I was confused about how to build it. I found that in your code you use self.use_res_connect = self.stride == 1 and inp == oup to ensure that the input channels match the output channels. However, the input and output channels are always different, so it seems this skip connection is never used because (inp == oup) is always false. I hope you can reply to this issue, thanks very much.
input channel and output channel are always the same.
For each inverted residual sequence, the input and output channels are the same for every layer except the first.
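To illustrate the point above, here is a minimal sketch (function name and tuple layout are mine, not the repo's) of how a MobileNetV2 stage config (t, c, n, s) expands into blocks. Only the first block of a stage changes the channel count and applies the stride; the repeats have stride 1 and inp == oup, so use_res_connect is true exactly for them:

```python
def expand_stage(inp, oup, n, stride):
    """Return (inp, oup, stride, use_res_connect) for each block in a stage."""
    blocks = []
    for i in range(n):
        s = stride if i == 0 else 1
        # same condition as the repo's self.use_res_connect
        use_res = (s == 1) and (inp == oup)
        blocks.append((inp, oup, s, use_res))
        inp = oup  # after the first block, the input channels equal oup
    return blocks

# e.g. a stage with n=3, s=2, going from 24 to 32 channels
stage = expand_stage(24, 32, 3, 2)
for b in stage:
    print(b)
# (24, 32, 2, False)  <- first block: no shortcut
# (32, 32, 1, True)   <- repeats: shortcut is used
# (32, 32, 1, True)
```

So the shortcut is not dead code; it is simply skipped on the first block of each stage.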
I understand now, thanks anyway.
But is this the same as the original paper?
The paper notes that 'when inner layer depth is 0 the underlying convolution is the identity function.'
I'm just curious whether the architecture should add an extra conv node for the first layer's shortcut.
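For what it's worth, the "extra conv node" being asked about would look like a ResNet-style projection shortcut. This is a hypothetical sketch of that idea (the class and a stand-in 3x3 body conv are mine); it is not what the repo or the MobileNetV2 paper does, since the paper simply omits the residual when shapes differ:

```python
import torch
import torch.nn as nn

class ProjectedShortcut(nn.Module):
    """First block of a stage with a 1x1 conv on the shortcut path,
    so the residual can be added even when inp != oup or stride > 1."""
    def __init__(self, inp, oup, stride):
        super().__init__()
        # stand-in for the inverted residual body
        self.body = nn.Conv2d(inp, oup, 3, stride, 1, bias=False)
        # projection shortcut: matches channels and spatial size
        self.proj = nn.Conv2d(inp, oup, 1, stride, bias=False)

    def forward(self, x):
        return self.body(x) + self.proj(x)

y = ProjectedShortcut(24, 32, 2)(torch.randn(1, 24, 16, 16))
print(y.shape)  # torch.Size([1, 32, 8, 8])
```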
The other thing is that, even though it is not indicated in the paper, putting the batch norm before the conv may improve accuracy, just like in ResNet v2.
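The two orderings being compared can be sketched as below (function names are illustrative; this is the pre-activation idea from ResNet v2 applied to a pointwise conv, not code from this repo):

```python
import torch
import torch.nn as nn

def conv_bn_relu(inp, oup):
    # post-activation ordering, as MobileNetV2 uses: conv -> BN -> ReLU6
    return nn.Sequential(
        nn.Conv2d(inp, oup, 1, bias=False),
        nn.BatchNorm2d(oup),
        nn.ReLU6(inplace=True),
    )

def bn_relu_conv(inp, oup):
    # pre-activation ordering (ResNet v2 style): BN -> ReLU6 -> conv
    return nn.Sequential(
        nn.BatchNorm2d(inp),
        nn.ReLU6(inplace=True),
        nn.Conv2d(inp, oup, 1, bias=False),
    )

x = torch.randn(1, 24, 8, 8)
print(conv_bn_relu(24, 32)(x).shape)  # torch.Size([1, 32, 8, 8])
print(bn_relu_conv(24, 32)(x).shape)  # torch.Size([1, 32, 8, 8])
```

Both produce the same output shape; whether pre-activation actually helps MobileNetV2's accuracy would need to be verified by experiment.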