
The implementation of ResNet is different from the official implementation in Caffe

Open lyuwenyu opened this issue 7 years ago • 12 comments

For the downsampling in each block/layer (not the skip-connection path), PyTorch does it in the conv3x3 with stride=2, while the official Caffe version does it in the first conv1x1 with stride=2:

conv1x1  <- Caffe downsamples here
conv3x3  <- PyTorch downsamples here
conv1x1

Here in Bottleneck:

        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                               padding=1, bias=False)

  (layer2): Sequential (
    (0): Bottleneck (
      (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
       ...

But in Caffe:


layer {
	bottom: "res2c"
	top: "res3a_branch2a"
	name: "res3a_branch2a"
	type: "Convolution"
	convolution_param {
		num_output: 128
		kernel_size: 1
		pad: 0
		stride: 2
		bias_term: false
	}
}

lyuwenyu avatar Jun 26 '17 02:06 lyuwenyu

From what I see, the torchvision implementation also uses 1x1 convolution kernels when downsampling; see here for an example.

fmassa avatar Sep 03 '17 18:09 fmassa

This is only partially true (and the issue should not be closed). The downsample branch is one of the convolutions that should have stride 2 (and it does, as you pointed out, @fmassa), but there are also the convolutions in the Bottleneck block (which the original issue was referencing) - see here. There, too, it is the first convolution (1x1) that should have stride=stride, not the second convolution (3x3).
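
To make the two placements concrete, here is a minimal sketch of where the stride would sit in each variant (the helper names and layout are mine, not torchvision's; BN, ReLU, and the downsample branch are omitted):

import torch.nn as nn

def bottleneck_convs_paper(inplanes, planes, stride=2):
    # Original paper / Caffe: downsampling in the first 1x1 conv
    conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, stride=stride, bias=False)
    conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
    conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, stride=1, bias=False)
    return conv1, conv2, conv3

def bottleneck_convs_torchvision(inplanes, planes, stride=2):
    # Current torchvision: downsampling in the 3x3 conv
    conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, stride=1, bias=False)
    conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
    conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, stride=1, bias=False)
    return conv1, conv2, conv3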

ptrendx avatar Oct 24 '17 22:10 ptrendx

Also, Table 1 in the paper describes that "downsampling is performed by conv3_1, conv4_1, and conv5_1 with a stride of 2." If I am not missing something in the code, it seems that the Bottleneck layer is using stride 2 in the second convolution, instead of using it in the first convolution (as pointed out by @lyuwenyu and @ptrendx). For instance, in the last convolutional group, we have a Bottleneck following this pattern:

# current torchvision pattern for the first Bottleneck of the group (conv5_1)
out = conv1_bn_relu(out, kernel=1, stride=1)
out = conv2_bn_relu(out, kernel=3, stride=2)   # downsampling happens here
out = conv3_bn_relu(out, kernel=1, stride=1)

While it should be:

# conv5_1: downsampling in the first 1x1 conv, as in the paper
out = conv1_bn_relu(out, kernel=1, stride=2)
out = conv2_bn_relu(out, kernel=3, stride=1)
out = conv3_bn_relu(out, kernel=1, stride=1)

# conv5_2
out = conv1_bn_relu(out, kernel=1, stride=1)
out = conv2_bn_relu(out, kernel=3, stride=1)
out = conv3_bn_relu(out, kernel=1, stride=1)

# conv5_3
out = conv1_bn_relu(out, kernel=1, stride=1)
out = conv2_bn_relu(out, kernel=3, stride=1)
out = conv3_bn_relu(out, kernel=1, stride=1)

If you paste the original prototxt into this network visualizer, you can see that in the last convolutional group only conv5_1 (res5a_branch2a) has stride 2; the following blocks have stride 1.

EDIT: clarity and corrected possible fix

I think it could be fixed by changing here to:

self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, stride=stride, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1,
                       padding=1, bias=False)

victorhcm avatar Oct 30 '17 03:10 victorhcm

While I agree that the definition of the Bottleneck module seems to be different from the one in the original paper, I believe that what is currently done throws away much less information at the beginning of each block (at the expense of a smaller receptive field). Indeed, the original implementation seems to throw away 75% of the input of the residual branch at the beginning of each Bottleneck module that downsamples (1x1 conv with a stride of 2).
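
To make the 75% figure concrete: a 1x1 convolution with stride 2 only ever reads every other row and column, so three out of four input positions never influence the output. A quick sanity check (just a sketch, counting which inputs receive a gradient):

import torch
import torch.nn as nn

x = torch.zeros(1, 1, 8, 8, requires_grad=True)
conv = nn.Conv2d(1, 1, kernel_size=1, stride=2, bias=False)
conv(x).sum().backward()

used = (x.grad != 0).sum().item()   # input positions that affect the output
print(used, x.numel())              # 16 of 64 positions -> 75% unused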

Note that the BasicBlock architecture follows the right pattern.

I'm reopening the issue and tagging @colesbury (who originally implemented ResNet in PyTorch). To summarize, the original paper has the downsampling happen here, while we are doing it here. The same is present in fb-resnet.torch.

fmassa avatar Nov 12 '17 16:11 fmassa

That makes sense, @fmassa. A lot is being discarded in the original implementation.

To add to this discussion, according to this user, Kaiming He wrote:

In all experiments in the paper, the stride=2 operation is in the first 1x1 conv layer when downsampling. This might not be the best choice, as it wastes some computations of the preceding block. For example, using stride=2 in the first 1x1 conv in the first block of conv3 is equivalent to using stride=2 in the 3x3 conv in the last block of conv2. So I feel applying stride=2 to either the first 1x1 or the 3x3 conv should work. I just kept it “as is”, because we do not have enough resources to investigate every choice.

I actually tried to fine-tune both variations to my task (which possibly isn't the most suitable way to evaluate it, though), and they both gave similar results.

victorhcm avatar Nov 20 '17 13:11 victorhcm

Let me try to summarise:

  1. The implementation of ResNet in PyTorch does differ from the one in Kaiming He's original paper: it moves the downsampling from the first 1x1 convolution to the 3x3 convolution in the Bottleneck block.
  2. This kind of variation is also known as "ResNet V1.5" as mentioned in https://github.com/pytorch/vision/issues/1266, which seems to be defined by NVIDIA according to https://github.com/NVIDIA/DeepLearningExamples/issues/419#issuecomment-597643335.
  3. The effects of this modification in practice have been pointed out here by NVIDIA:

This difference makes ResNet50 v1.5 slightly more accurate (~0.5% top1) than v1, but comes with a small performance drawback (~5% imgs/sec).

  4. It may be unnecessary to change it back to the original implementation, since the differences are negligible (the accuracy actually increases slightly). Besides, changing it may affect the performance of previously pre-trained models.

All in all, some comments may be needed in resnet.py to explain this situation, both to close this issue and to prevent similar issues in the future. What do you think, @fmassa? If needed, I can open a PR for it.
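
A sketch of the kind of note that could sit next to the Bottleneck definition (the wording is mine, just to illustrate):

# Bottleneck in torchvision places the stride for downsampling at the 3x3
# convolution (self.conv2), while the original paper places it at the first
# 1x1 convolution (self.conv1). This variant is also known as ResNet V1.5
# and, per the NVIDIA note quoted above, improves top-1 accuracy by ~0.5%
# at a small (~5% imgs/sec) throughput cost.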

Dirtybluer avatar Mar 11 '20 16:03 Dirtybluer

@Dirtybluer a PR adding some comments to the resnet code would be great!

fmassa avatar Mar 13 '20 13:03 fmassa

Perhaps we could have a (say) v1_downsampling=False argument to choose the v1 implementation? This would be particularly useful if you want to reproduce, as closely as possible, a paper that uses a v1 ResNet backbone.

Of course, you could cook a script yourself to hack a resnet instance to move the downsampling to the 1x1 convolution, but I think it would be better if everyone could rely on this being implemented consistently.
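
For the "cook a script" route, a rough sketch of what moving the stride back to the 1x1 conv could look like on an existing torchvision model (the helper name is hypothetical, and weights trained with the v1.5 layout won't give the same results after the change):

import torchvision
from torchvision.models.resnet import Bottleneck

def to_v1_downsampling(model):
    # Move the stride-2 from the 3x3 conv (v1.5) to the first 1x1 conv (v1);
    # the downsample branch in the shortcut is left untouched.
    for m in model.modules():
        if isinstance(m, Bottleneck) and m.conv2.stride == (2, 2):
            m.conv1.stride = (2, 2)
            m.conv2.stride = (1, 1)
    return model

model = to_v1_downsampling(torchvision.models.resnet50())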

What do you think? If the above sounds reasonable, I can throw a PR.

alegonz avatar May 27 '20 13:05 alegonz

Just FYI, training a Resnet34 model on CIFAR10 gives much worse performance when done with torchvision's version:

  • with torchvision's ResNet34 I am getting up to 89% accuracy on CIFAR10 (trained as in https://github.com/kuangliu/pytorch-cifar)
  • by using Kuangliu's version, I am easily getting 93% accuracy.


I was struggling to reproduce CIFAR10 results as I assumed the performance should be similar between the two repos.

chledowski avatar Sep 17 '22 23:09 chledowski

Just FYI, training a Resnet34 model on CIFAR10 gives much worse performance when done with torchvision's version:

  • with torchvision's ResNet34 I am getting up to 89% accuracy on CIFAR10 (trained as in https://github.com/kuangliu/pytorch-cifar)
  • by using Kuangliu's version, I am easily getting 93% accuracy.

@chledowski The input image size of CIFAR10 is much smaller than ImageNet's. I guess you can prune off one layer of TorchVision's model to get results similar to Kuang Liu's, and that seems to be the trick behind Liu's repo.

zhiqwang avatar Sep 18 '22 01:09 zhiqwang

Thanks for the info! You're right, I just read that the first conv layer in torchvision has a kernel of size 7, stride 2, and padding 3, while Kuang Liu uses kernel 3, stride 1, and no padding, I think.
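
For anyone hitting the same gap: the usual CIFAR adaptation of torchvision's ResNet is to swap the 7x7/stride-2 stem for a 3x3/stride-1 one and drop the max-pool. A sketch of that idea (not taken verbatim from either repo):

import torch.nn as nn
import torchvision

model = torchvision.models.resnet34()
# 32x32 inputs: small stem, no early downsampling
model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
model.maxpool = nn.Identity()
model.fc = nn.Linear(model.fc.in_features, 10)  # CIFAR-10 classes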

chledowski avatar Sep 18 '22 07:09 chledowski

Just FYI, training a Resnet34 model on CIFAR10 gives much worse performance when done with torchvision's version:

  • with torchvision's ResNet34 I am getting up to 89% accuracy on CIFAR10 (trained as in https://github.com/kuangliu/pytorch-cifar)
  • by using Kuangliu's version, I am easily getting 93% accuracy.


I was struggling to reproduce CIFAR10 results as I assumed the performance should be similar between the two repos.

Same here on CIFAR100. It's making hyperparameter tuning frustrating, and I can't find where the problem is.

[plot: test-accuracy curves for the two implementations]

The cyan one is the implementation from https://github.com/weiaicunzai/pytorch-cifar100, and the pink one is PyTorch's implementation.

youyinnn avatar Jan 21 '24 18:01 youyinnn