
BNInception architecture

Open hokmund opened this issue 6 years ago • 5 comments

Seems like there is a mistake in the BNInception architecture after the Oct 29th commit. I am trying to use its convolutional part as a pretrained backbone for transfer learning and get this error during the forward pass:

RuntimeError: given groups=1, weight of size [64, 192, 1, 1], expected input[1, 64, 8, 8] to have 192 channels, but got 64 channels instead

hokmund avatar Dec 27 '18 10:12 hokmund

The forward pass of BNInception has been tested and should work on pytorch>=0.4.

What is your version of pretrainedmodels? Consider updating: pip install --upgrade pretrainedmodels

Cadene avatar Dec 28 '18 01:12 Cadene

@Cadene Maybe he uses fastai...

Bonsen avatar Dec 28 '18 16:12 Bonsen

Hi, I face the same issue, probably at this layer:

(inception_3a_1x1): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1))
(inception_3a_1x1_bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(inception_3a_relu_1x1): ReLU(inplace)
(inception_3a_3x3_reduce): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1))

Could you explain why inception_3a_3x3_reduce has in_channels=192 when the BatchNorm layer before it outputs only 64 channels? I use the newest pytorch.

jaideep11061982 avatar Feb 05 '19 15:02 jaideep11061982

I did upgrade the models... I use fastai; the point of failure is when it does model.eval() with a dummy batch of shape (1, c, h, w). The evaluation itself is done with standard pytorch.

jaideep11061982 avatar Feb 05 '19 16:02 jaideep11061982

Please ignore the layers above; here is where the error occurs:

self.inception_3a_3x3_bn = nn.BatchNorm2d(64, affine=True)
self.inception_3a_relu_3x3 = nn.ReLU(inplace)
self.inception_3a_double_3x3_reduce = nn.Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1))

The batch norm returns only 64 channels, but the reduce layer expects 192 channels.

jaideep11061982 avatar Feb 06 '19 06:02 jaideep11061982
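For what it's worth, the 192-channel input is expected: in an Inception block the branches run in parallel on the same input, so the printed module listing is misleading when read sequentially. The error appears when the model is flattened into a plain sequential chain (as naive transfer-learning code may do), feeding one branch's 64-channel output into the next branch instead of the shared 192-channel input. A minimal sketch of the channel arithmetic, using the standard BN-Inception 3a branch widths (the per-branch output counts here are taken from the published architecture, not from this thread):

```python
# Channels entering inception_3a in BNInception (output of the stem).
stem_out_channels = 192

# Each branch consumes the SAME 192-channel input in parallel; the
# Conv2d(192, 64, ...) layers seen in the module printout are the first
# layer of separate branches, not a sequential chain.
branch_out_channels = {
    "1x1": 64,          # inception_3a_1x1: Conv2d(192, 64, 1x1)
    "3x3": 64,          # inception_3a_3x3_reduce -> 3x3 conv
    "double_3x3": 96,   # inception_3a_double_3x3_reduce -> two 3x3 convs
    "pool_proj": 32,    # avg-pool branch followed by a 1x1 projection
}

# The block output is the channel-wise concatenation of all branches.
block_out_channels = sum(branch_out_channels.values())
print(block_out_channels)  # 256 channels feed inception_3b
```

Flattening the branches into an nn.Sequential therefore feeds the 64-channel output of inception_3a_1x1's BatchNorm into inception_3a_3x3_reduce, which expects the 192-channel stem output, producing exactly the "expected input to have 192 channels, but got 64 channels" RuntimeError quoted above.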