
About the inverted residual block module

Open hq-liu opened this issue 7 years ago • 5 comments

Hi, I have a question about the inverted residual block module in your code. When I implemented this model, I was confused about how to build it. I found that in your code you use self.use_res_connect = self.stride == 1 and inp == oup to ensure that the input channels match the output channels. However, the input channels and output channels are always different. Thus, it seems the skip connection is never used, because (inp == oup) is always false. Hope you can reply to this issue, thanks very much.

hq-liu avatar Jan 24 '18 02:01 hq-liu

input channel and output channel are always the same.

FatherOfHam avatar Jan 24 '18 03:01 FatherOfHam

For each inverted residual sequence, input channel and output channel are the same except for the first layer.

tonylins avatar Jan 24 '18 03:01 tonylins
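To make the explanation above concrete, here is a torch-free sketch that walks the standard MobileNetV2 config table (t, c, n, s) and computes the same use_res_connect condition the repo uses. The config values come from the paper; the helper name skip_flags is made up for illustration.

```python
# Sketch of when MobileNetV2's inverted-residual skip connection fires.
# Pure Python, no torch: we just trace channel counts through the
# standard (t, c, n, s) config table from the paper.

interverted_residual_setting = [
    # t (expansion), c (output channels), n (repeats), s (stride of first block)
    [1, 16, 1, 1],
    [6, 24, 2, 2],
    [6, 32, 3, 2],
    [6, 64, 4, 2],
    [6, 96, 3, 1],
    [6, 160, 3, 2],
    [6, 320, 1, 1],
]

def skip_flags(input_channel=32):
    """Return (inp, oup, stride, use_res_connect) for every block."""
    flags = []
    for t, c, n, s in interverted_residual_setting:
        for i in range(n):
            stride = s if i == 0 else 1
            # same condition as the repo's InvertedResidual
            use_res_connect = (stride == 1 and input_channel == c)
            flags.append((input_channel, c, stride, use_res_connect))
            input_channel = c
    return flags

flags = skip_flags()
# The first block of each sequence changes the channel count (or the
# stride), so it never uses the skip; every repeated block does.
```

Running this shows 17 blocks total, with the skip active in 10 of them: exactly the non-first blocks of each sequence, matching the comment above.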

I understand now, thanks anyway.

hq-liu avatar Jan 24 '18 05:01 hq-liu

But is this the same as in the original paper?

The paper indicated that 'when input layer depth is 0 the underlying conv is the identity function.'

I'm just curious whether the architecture should add an extra conv node for the first-layer shortcut.

foreverYoungGitHub avatar Jan 30 '18 15:01 foreverYoungGitHub
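The "extra conv node" being asked about would look like a ResNet-style projection shortcut: when the channel counts differ, project the input with a 1x1 conv instead of dropping the skip entirely. This is a hypothetical sketch (not what this repo does), with numpy matmuls standing in for the conv; both function names are made up.

```python
import numpy as np

def pointwise_conv(x, w):
    """1x1 convolution expressed as a channel-mixing matmul.
    x: (C_in, H, W) feature map; w: (C_out, C_in) weights."""
    c_in, h, width = x.shape
    return (w @ x.reshape(c_in, -1)).reshape(w.shape[0], h, width)

def block_with_projection_shortcut(x, body, w_proj=None):
    """Hypothetical alternative to dropping the skip connection:
    if w_proj is given, project the input to the body's output
    channel count before adding, ResNet-downsample style."""
    out = body(x)
    shortcut = x if w_proj is None else pointwise_conv(x, w_proj)
    return out + shortcut
```

MobileNetV2 deliberately avoids this extra conv and simply omits the skip when shapes differ, which keeps the channel-changing blocks cheap.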

The other thing is that, even though it is not indicated in the paper, putting the batch norm before the conv may improve accuracy, just like ResNet v2.

foreverYoungGitHub avatar Jan 30 '18 15:01 foreverYoungGitHub
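For clarity, the two orderings being contrasted are the repo's conv -> BN -> ReLU6 ("post-activation") and the ResNet-v2-style BN -> ReLU6 -> conv ("pre-activation"). Below is a toy numpy sketch of the two unit orderings; the ops are simplified stand-ins (no learned BN affine, 1x1 conv as a matmul), not the repo's actual layers.

```python
import numpy as np

def bn(x):
    # toy batch norm: per-channel normalization over spatial dims,
    # no learned scale/shift
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True) + 1e-5
    return (x - mean) / std

def relu6(x):
    return np.clip(x, 0.0, 6.0)

def conv1x1(x, w):
    c_in, h, width = x.shape
    return (w @ x.reshape(c_in, -1)).reshape(w.shape[0], h, width)

def post_act_unit(x, w):
    # ordering used in this repo (and the MobileNetV2 paper)
    return relu6(bn(conv1x1(x, w)))

def pre_act_unit(x, w):
    # ResNet-v2-style pre-activation ordering suggested above
    return conv1x1(relu6(bn(x)), w)
```

Pre-activation mainly matters for what flows through the skip connection: with BN and the nonlinearity moved before the conv, the residual path stays a clean identity.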